Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-21 Thread Grant Likely
On Fri, 21 Mar 2014 14:16:20 +0200, Tomi Valkeinen  
wrote:
> On 21/03/14 13:47, Grant Likely wrote:
> 
> > I'm firm on the opinion that the checking must also happen at runtime.
> > The biggest part of my objection has been how easy it would be to get a
> > linkage out of sync, and dtc is not necessarily the last tool to touch
> > the dtb before the kernel gets booted. I want the kernel to flat out
> > reject any linkage that is improperly formed.
> 
> Isn't it trivial to verify it with the current v4l2 bindings? An
> endpoint must have a 'remote-endpoint' property, and the endpoint on the
> other end must have a similar property pointing back at the first
> endpoint. Anything else is an error.
> 
> I agree that it's easier to write bad links in the dts with
> double-linking than with single-linking, but it's still trivial to
> verify it in the kernel.

Right, which is exactly what I'm asking for.

g.

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-21 Thread Grant Likely
On Fri, Mar 21, 2014 at 12:44 PM, Laurent Pinchart
 wrote:
> Hi Grant,
>
> On Friday 21 March 2014 08:15:34 Grant Likely wrote:
>> Why don't we instead try a Google Hangout or a phone call today.
>> Anywhere between 11:30 and 14:00 GMT would work for me. I'd offer to
>> provide the tea, but I haven't quite perfected transporter technology
>> yet.
>
> That would work for me, but not today I'm afraid. Will you already be in
> California on Monday?

Yes. Let's try the week after.

g.


Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-21 Thread Laurent Pinchart
Hi Grant,

On Friday 21 March 2014 08:15:34 Grant Likely wrote:
> On Fri, 21 Mar 2014 00:26:12 +0100, Laurent Pinchart wrote:
> > On Thursday 20 March 2014 23:12:50 Grant Likely wrote:
> > > On Thu, 20 Mar 2014 19:52:53 +0100, Laurent Pinchart wrote:
> > > > Then we might not be talking about the same thing. I'm talking about
> > > > DT bindings to represent the topology of the device, not how drivers
> > > > are wired together.
> > > 
> > > Possibly. I'm certainly confused now. You brought up the component
> > > helpers in drivers/base/component.c, so I thought working out
> > > dependencies is part of the purpose of this binding. Everything I've
> > > heard so far has given me the impression that the graph binding is tied
> > > up with knowing when all of the devices exist.
> > 
> > The two are related, you're of course right about that.
> > 
> > We're not really moving forward here. Part of our disagreement comes, in my
> > opinion, from having different requirements and different views of the
> > problem, caused by experience with different kinds of devices. This is
> > much easier to solve by sitting around the same table than discussing on
> > mailing lists. I would propose a meeting at the ELC but that's still a
> > bit far away and would delay progress by more than one month, which is
> > probably not acceptable.
> > 
> > I can reply to the e-mail where I've drawn one use case I have to deal
> > with to detail my needs if that can help.
> > 
> > Alternatively the UK isn't that far away and I could jump in a train if
> > you can provide tea for the discussion :-)
> 
> I'm game for that, but it is a long train ride. I'm up in Aberdeen which
> is 8 hours from London by train. Also, I'm travelling next week to
> California (Collaboration summit), so it will have to be in 2 weeks
> time.
> 
> Why don't we instead try a Google Hangout or a phone call today.
> Anywhere between 11:30 and 14:00 GMT would work for me. I'd offer to
> provide the tea, but I haven't quite perfected transporter technology
> yet.

That would work for me, but not today I'm afraid. Will you already be in
California on Monday?

-- 
Regards,

Laurent Pinchart



Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-21 Thread Tomi Valkeinen
On 21/03/14 13:47, Grant Likely wrote:

> I'm firm on the opinion that the checking must also happen at runtime.
> The biggest part of my objection has been how easy it would be to get a
> linkage out of sync, and dtc is not necessarily the last tool to touch
> the dtb before the kernel gets booted. I want the kernel to flat out
> reject any linkage that is improperly formed.

Isn't it trivial to verify it with the current v4l2 bindings? An
endpoint must have a 'remote-endpoint' property, and the endpoint on the
other end must have a similar property pointing back at the first
endpoint. Anything else is an error.

I agree that it's easier to write bad links in the dts with
double-linking than with single-linking, but it's still trivial to
verify it in the kernel.
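
A minimal fragment of what that check has to accept might look like the
following (node and label names are invented for illustration): both
endpoints carry a 'remote-endpoint' phandle, and each points back at the
other. If either property is missing or points elsewhere, the link is
malformed.

```dts
encoder {
	port {
		enc_out: endpoint {
			remote-endpoint = <&panel_in>;
		};
	};
};

panel {
	port {
		panel_in: endpoint {
			remote-endpoint = <&enc_out>;
		};
	};
};
```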

 Tomi






Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-21 Thread Tomi Valkeinen
On 20/03/14 19:01, Grant Likely wrote:

> I think depending on a generic graph walk is where I have the biggest
> concern about the design. I don't think it is a good idea for the master
> device to try a generic walk over the graph looking for other devices
> that might be components because it cannot know whether or not further
> links are relevant, or even if other endpoint nodes in the target
> hierarchy actually conform to the graph binding in the same way.
> 
> Consider the example of a system with two video controllers (one
> embedded and one discrete), a display mux, and a panel. The display
> controller depends on the mux, and the mux depends on the panel. It
> would be entirely reasonable to start up the display subsystem with the
> embedded controller without the discrete adapter being available, but
> the way the current graph pattern is proposed there is no dependency
> information between the devices.

For some reason I don't understand this master and dependency way of
thinking. I just can't twist my brain to it, please help me =).

With the setup you describe above, why does the video controller depend
on the mux, and why is it the master? Why does the DMA engine not depend
on the embedded video controller, and why is the DMA engine not the master?

With the setup above, what are we really interested in? It's the
display, right? We want to have the display working, with resolution and
video timings that work for the display. The mux and the display
controllers are just necessary evils to make the display work. The
display depends on the mux to provide it a video stream. The mux depends
on the two video controllers to provide the mux two video streams. The
video controllers depend (possibly) on the SoC's DMA engine or the PCI
bus to provide them video data.

And if we consider the same setup as above, but the mux has two
exclusive outputs, it again works fine with the dependency I described.
If you want to enable panel 1, you'll depend on mux and video
controllers, but not on panel 2. So you can leave the panel 2 driver out
and things still work ok.
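
Written as a graph binding with links in the direction of data flow, the
mux in that setup could look something like this (a sketch, with invented
labels; only the mux node is shown, and the referenced endpoints would
live in the controller and panel nodes):

```dts
mux: display-mux {
	ports {
		#address-cells = <1>;
		#size-cells = <0>;

		port@0 {	/* input from the embedded controller */
			reg = <0>;
			mux_in0: endpoint {
				remote-endpoint = <&vc0_out>;
			};
		};

		port@1 {	/* input from the discrete controller */
			reg = <1>;
			mux_in1: endpoint {
				remote-endpoint = <&vc1_out>;
			};
		};

		port@2 {	/* output to the panel */
			reg = <2>;
			mux_out: endpoint {
				remote-endpoint = <&panel_in>;
			};
		};
	};
};
```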

But note that I don't think this dependency has strict relation to the
DT graph representation, see below.

> I really do think the dependency direction needs to be explicit so that
> a driver knows whether or not a given link is relevant for it to start,

I think that comes implicitly from the driver, it doesn't need to be
described in the DT. If a device has an input port, and the device is
configured to use that input port, the device depends on whatever is on
the other side of the input port. The device driver must know that.

Somehow a device driver needs to find out whether the drivers behind its
input ports are ready. It could use the links in the DT directly, if they
are supplied in that direction, or it could rely on someone else parsing
the DT and exposing the information via some API.

I think it's simpler for the SW to follow the links directly, but that
would mean having the links in the opposite direction than the data
flow, which feels a bit odd to me.

> and there must be a driver that knows how to interpret the target
> node. A device that is a master needs to know which links are
> dependencies, and which are not.

Well, again, I may not quite understand what the master means here. But
for me, the display is the master of the pipeline. The driver for the
display is the one that has to decide what kind of video signal is
acceptable, how the signal must be enabled, and disabled, etc.

When someone (the master's master =) tells the panel to enable itself,
the panel needs to use an API to configure and enable its input ports.
The devices on the other end of the input ports then configure and
enable their inputs. And so on.

Anyway, I do think this is more of a SW organization topic than how we
should describe the hardware. As I see it, the parent-child
relationships in the DT describe the control paths and the graph
describes the data paths. Having the data paths described in the
direction of data flow (or double-linked in case of bi-dir link) sounds
logical to me, but I think the inverse could work fine too.

But using some kind of CPU-centric direction doesn't sound very usable:
it makes no sense for cases with peripheral-to-peripheral links, and the
control bus already describes the CPU-centric direction in cases where
there is a clear CPU-to-peripheral direction.

> I'm not even talking about the bi-directional link issue. This issue
> remains regardless of whether or not bidirectional links are used.
> 
> I would solve it in one of the following two ways:
> 
> 1) Make masters only look at one level of dependency. Make the component
> driver responsible for checking /its/ dependencies. If its dependencies
> aren't yet met, then don't register the component as ready. We could
> probably make the drivers/base/component code allow components to pull
> in additional components as required. This approach shouldn't even
> require a change to the 

Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-21 Thread Grant Likely
On Fri, 21 Mar 2014 11:44:24 +0100, Andrzej Hajda  wrote:
> On 03/20/2014 06:23 PM, Grant Likely wrote:
> > On Tue, 11 Mar 2014 14:16:37 +0100, Laurent Pinchart 
> >  wrote:
> >> On Tuesday 11 March 2014 14:59:20 Tomi Valkeinen wrote:
> >>> So depending on the use case, the endpoints would point to opposite
> >>> direction from the encoder's point of view.
> >>>
> >>> And if I gathered Grant's opinion correctly (correct me if I'm wrong),
> >>> he thinks things should be explicit, i.e. the bindings for, say, an
> >>> encoder should state that the encoder's output endpoint _must_ contain a
> >>> remote-endpoint property, whereas the encoder's input endpoint _must
> >>> not_ contain a remote-endpoint property.
> >>
> >> Actually my understanding was that DT links would have the same direction
> >> as the data flow. There would be no ambiguity in that case as the direction
> >> of the data flow is known. What happens for bidirectional data flows still
> >> needs to be discussed though. And if we want to use the of-graph bindings
> >> to describe graphs without a data flow, a decision will need to be taken
> >> there too.
> > 
> > On further thinking, I would say linkage direction should be in the
> > direction that would be considered the dependency order... I'm going to
> > soften my position though. I think the generic pattern should still
> > recommend unidirectional links in the direction of device dependency, but
> 
I am not sure what you mean by 'device dependency', but I am sure it will
not be difficult to present problematic cases: circular dependencies,
two-way dependencies, etc.

My understanding has been that the link data would be used to determine
when the controller driver can be brought up and made active. Certainly
both sides of a link need to be 'live' before the link can be used. The
kernel must have a way to resolve the question of "who starts first?",
whether the consumer starts before the producer, or two components need
to start before the controller driver can start. That is the dependency
chain I'm talking about.

> The only problem with unidirectional links, from a programming point of
> view, is that the destination port/interface should be exposed using some
> framework and the driver of the source link should grab it using the same
> framework, using the port/endpoint node for identification. In the case
> of bi-directional links the same process should happen, but the DT does
> not dictate who should expose and who grabs.
> 
> So from a programming point of view it should be easy to handle
> unidirectional links regardless of the direction. So I guess the best
> is to use the data flow direction, as it seems to be the most natural.

right.

> 
> 
> > I'm okay with allowing the bidirection option if the helper functions
> > are modified to validate the target endpoint. I think it needs to test
> > for the following:
> > - Make sure the endpoint either:
> >   - does not have a backlink, or
> >   - the backlink points back to the origin node
> > - If the target is an endpoint node, then make sure the parent doesn't
> >   have a link of any kind
> > - If the target is a port node, make sure it doesn't have any endpoint
> >   children nodes at all.
> 
> I think link validation can be done at dts compile time.

I'm firm on the opinion that the checking must also happen at runtime.
The biggest part of my objection has been how easy it would be to get a
linkage out of sync, and dtc is not necessarily the last tool to touch
the dtb before the kernel gets booted. I want the kernel to flat out
reject any linkage that is improperly formed.

g.

> 
> Regards
> Andrzej
> 
> > 
> > g.
> > 
> > 
> 



Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-21 Thread Andrzej Hajda
On 03/20/2014 06:23 PM, Grant Likely wrote:
> On Tue, 11 Mar 2014 14:16:37 +0100, Laurent Pinchart 
>  wrote:
>> On Tuesday 11 March 2014 14:59:20 Tomi Valkeinen wrote:
>>> So depending on the use case, the endpoints would point to opposite
>>> direction from the encoder's point of view.
>>>
>>> And if I gathered Grant's opinion correctly (correct me if I'm wrong),
>>> he thinks things should be explicit, i.e. the bindings for, say, an
>>> encoder should state that the encoder's output endpoint _must_ contain a
>>> remote-endpoint property, whereas the encoder's input endpoint _must
>>> not_ contain a remote-endpoint property.
>>
>> Actually my understanding was that DT links would have the same direction
>> as the data flow. There would be no ambiguity in that case as the direction
>> of the data flow is known. What happens for bidirectional data flows still
>> needs to be discussed though. And if we want to use the of-graph bindings
>> to describe graphs without a data flow, a decision will need to be taken
>> there too.
> 
> On further thinking, I would say linkage direction should be in the
> direction that would be considered the dependency order... I'm going to
> soften my position though. I think the generic pattern should still
> recommend unidirectional links in the direction of device dependency, but

I am not sure what you mean by 'device dependency', but I am sure it will
not be difficult to present problematic cases: circular dependencies,
two-way dependencies, etc.

The only problem with unidirectional links, from a programming point of
view, is that the destination port/interface should be exposed using some
framework and the driver of the source link should grab it using the same
framework, using the port/endpoint node for identification. In the case
of bi-directional links the same process should happen, but the DT does
not dictate who should expose and who grabs.

So from a programming point of view it should be easy to handle
unidirectional links regardless of the direction. So I guess the best
is to use the data flow direction, as it seems to be the most natural.


> I'm okay with allowing the bidirection option if the helper functions
> are modified to validate the target endpoint. I think it needs to test
> for the following:
> - Make sure the endpoint either:
>   - does not have a backlink, or
>   - the backlink points back to the origin node
> - If the target is an endpoint node, then make sure the parent doesn't
>   have a link of any kind
> - If the target is a port node, make sure it doesn't have any endpoint
>   children nodes at all.

I think link validation can be done at dts compile time.

Regards
Andrzej

> 
> g.
> 
> 



Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-21 Thread Tomi Valkeinen
On 20/03/14 20:49, Laurent Pinchart wrote:

>> The CPU is the _controlling_ component - it's the component that has to
>> configure the peripherals so they all talk to each other in the right
>> way.  Therefore, the view of it needs to be CPU centric.
>>
>> If we were providing a DT description for consumption by some other
>> device in the system, then the view should be as seen from that device
>> instead.
>>
>> Think about this.  Would you describe a system starting at, say, the
>> system keyboard, and branching all the way through just because that's
>> how you interact with it, or would you describe it from the CPU's point
>> of view, because that's what has to be in control of the system?
> 
> DT has been designed to represent a control-based view of the system. It does 
> so pretty well using its tree-based model. However, it doesn't have a native 
> way to represent a flow-based graph, hence the OF graph solution we're 
> discussing. The whole point of this proposal is to represent the topology of 
> the media device, not how each entity is controlled.

I agree with Laurent here. I think this is an important point to keep in
mind. We already describe the control graph in the DT via the
parent-child relationships. There's no point in describing the same
thing again with the graph links being discussed.

 Tomi






Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-21 Thread Grant Likely
On Fri, 21 Mar 2014 00:26:12 +0100, Laurent Pinchart 
 wrote:
> On Thursday 20 March 2014 23:12:50 Grant Likely wrote:
> > On Thu, 20 Mar 2014 19:52:53 +0100, Laurent Pinchart wrote:
> > > Then we might not be talking about the same thing. I'm talking about DT
> > > bindings to represent the topology of the device, not how drivers are
> > > wired together.
> > 
> > Possibly. I'm certainly confused now. You brought up the component helpers
> > in drivers/base/component.c, so I thought working out dependencies is part
> > of the purpose of this binding. Everything I've heard so far has given me
> > the impression that the graph binding is tied up with knowing when all of
> > the devices exist.
> 
> The two are related, you're of course right about that.
> 
> We're not really moving forward here. Part of our disagreement comes, in my
> opinion, from having different requirements and different views of the
> problem, caused by experience with different kinds of devices. This is much
> easier to
> solve by sitting around the same table than discussing on mailing lists. I 
> would propose a meeting at the ELC but that's still a bit far away and would 
> delay progress by more than one month, which is probably not acceptable.
> 
> I can reply to the e-mail where I've drawn one use case I have to deal
> with to detail my needs if that can help.
> 
> Alternatively the UK isn't that far away and I could jump in a train if you 
> can provide tea for the discussion :-)

I'm game for that, but it is a long train ride. I'm up in Aberdeen which
is 8 hours from London by train. Also, I'm travelling next week to
California (Collaboration summit), so it will have to be in 2 weeks
time.

Why don't we instead try a Google Hangout or a phone call today.
Anywhere between 11:30 and 14:00 GMT would work for me. I'd offer to
provide the tea, but I haven't quite perfected transporter technology
yet.

g.



Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-21 Thread Grant Likely
On Fri, 21 Mar 2014 00:26:12 +0100, Laurent Pinchart 
laurent.pinch...@ideasonboard.com wrote:
 On Thursday 20 March 2014 23:12:50 Grant Likely wrote:
  On Thu, 20 Mar 2014 19:52:53 +0100, Laurent Pinchart wrote:
   Then we might not be talking about the same thing. I'm talking about DT
   bindings to represent the topology of the device, not how drivers are
   wired together.
  
  Possibly. I'm certainly confused now. You brought up the component helpers
  in drivers/base/component.c, so I thought working out dependencies is part
  of the purpose of this binding. Everything I've heard so far has given me
  the impression that the graph binding is tied up with knowing when all of
  the devices exist.
 
 The two are related, you're of course right about that.
 
 We're not really moving forward here. Part of our disagreement comes in my 
 opinion from having different requirements and different views of the 
 problem, 
 caused by experiences with different kind of devices. This is much easier to 
 solve by sitting around the same table than discussing on mailing lists. I 
 would propose a meeting at the ELC but that's still a bit far away and would 
 delay progress by more than one month, which is probably not acceptable.
 
 I can reply to the e-mail where I've drawn one use case I have to deal with 
 to 
 detail my needs if that can help.
 
 Alternatively the UK isn't that far away and I could jump in a train if you 
 can provide tea for the discussion :-)

I'm game for that, but it is a long train ride. I'm up in Aberdeen which
is 8 hours from London by train. Also, I'm travelling next week to
California (Collaboration summit), so it will have to be in 2 weeks
time.

Why don't we instead try a Google Hangout or a phone call today.
Anywhere between 11:30 and 14:00 GMT would work for me. I'd offer to
provide the tea, but I haven't quite perfected transporter technology
yet.

g.

--
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-21 Thread Tomi Valkeinen
On 20/03/14 20:49, Laurent Pinchart wrote:

 The CPU is the _controlling_ component - it's the component that has to
 configure the peripherals so they all talk to each other in the right
 way.  Therefore, the view of it needs to be CPU centric.

 If we were providing a DT description for consumption by some other
 device in the system, then the view should be as seen from that device
 instead.

 Think about this.  Would you describe a system starting at, say, the
 system keyboard, and branching all the way through just becuase that's
 how you interact with it, or would you describe it from the CPUs point
 of view because that's what has to be in control of the system.
 
 DT has been designed to represent a control-based view of the system. It does 
 so pretty well using its tree-based model. However, it doesn't have a native 
 way to represent a flow-based graph, hence the OF graph solution we're 
 discussing. The whole point of this proposal is to represent the topology of 
 the media device, not how each entity is controlled.

I agree with Laurent here. I think this is an important point to keep in
mind. We already describe the control graph in the DT via the
parent-child relationships. There's no point in describing the same
thing again with the graph links being discussed.

 Tomi




signature.asc
Description: OpenPGP digital signature


Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-21 Thread Andrzej Hajda
On 03/20/2014 06:23 PM, Grant Likely wrote:
 On Tue, 11 Mar 2014 14:16:37 +0100, Laurent Pinchart 
 laurent.pinch...@ideasonboard.com wrote:
 On Tuesday 11 March 2014 14:59:20 Tomi Valkeinen wrote:
 So depending on the use case, the endpoints would point to opposite
 direction from the encoder's point of view.

 And if I gathered Grant's opinion correctly (correct me if I'm wrong),
 he thinks things should be explicit, i.e. the bindings for, say, an
 encoder should state that the encoder's output endpoint _must_ contain a
 remote-endpoint property, whereas the encoder's input endpoint _must
 not_ contain a remote-endpoint property.

 Actually my understand was that DT links would have the same direction as 
 the 
 data flow. There would be no ambiguity in that case as the direction of the 
 data flow is known. What happens for bidirectional data flows still need to 
 be 
 discussed though. And if we want to use the of-graph bindings to describe 
 graphs without a data flow, a decision will need to be taken there too.
 
 On further thinking, I would say linkage direction should be in the
 direction that would be considered the dependency order... I'm going to
 soften my position though. I think the generic pattern should still
 recommend unidirection links in direction of device dependency, but

I am not sure what you mean by 'device dependency' but I am sure it will
not be difficult to present problematic cases, maybe circular
dependencies, two-way dependencies, etc.

The only problem of unidirectional links from programming point of view
is that destination port/interface should be exposed using some
framework and driver of source link should grab it using the same
framework, using port/endpoint node for identification. In case of
bi-directional links the same process should happen but DT do not
dictates who should expose and who grabs.

So from programming point of view it should be easy to handle
unidirectional links regardless of the direction. So I guess the best
is to use data flow direction as it seems to be the most natural.


 I'm okay with allowing the bidirection option if the helper functions
 are modified to validate the target endpoint. I think it needs to test
 for the following:
 - Make sure the endpoint either:
   - does not have a backlink, or
   - the backlink points back to the origin node
 - If the target is an endpoint node, then make sure the parent doesn't
   have a link of any kind
 - If the target is a port node, make sure it doesn't have any endpoint
   children nodes at all.

I think link validation can be done at dts compile time.

Regards
Andrzej

 
 g.
 
 

--
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-21 Thread Grant Likely
On Fri, 21 Mar 2014 11:44:24 +0100, Andrzej Hajda a.ha...@samsung.com wrote:
 On 03/20/2014 06:23 PM, Grant Likely wrote:
  On Tue, 11 Mar 2014 14:16:37 +0100, Laurent Pinchart 
  laurent.pinch...@ideasonboard.com wrote:
  On Tuesday 11 March 2014 14:59:20 Tomi Valkeinen wrote:
  So depending on the use case, the endpoints would point to opposite
  direction from the encoder's point of view.
 
  And if I gathered Grant's opinion correctly (correct me if I'm wrong),
  he thinks things should be explicit, i.e. the bindings for, say, an
  encoder should state that the encoder's output endpoint _must_ contain a
  remote-endpoint property, whereas the encoder's input endpoint _must
  not_ contain a remote-endpoint property.
 
  Actually my understand was that DT links would have the same direction as 
  the 
  data flow. There would be no ambiguity in that case as the direction of 
  the 
  data flow is known. What happens for bidirectional data flows still need 
  to be 
  discussed though. And if we want to use the of-graph bindings to describe 
  graphs without a data flow, a decision will need to be taken there too.
  
  On further thinking, I would say linkage direction should be in the
  direction that would be considered the dependency order... I'm going to
  soften my position though. I think the generic pattern should still
  recommend unidirection links in direction of device dependency, but
 
 I am not sure what you mean by 'device dependency' but I am sure it will
 not be difficult to present problematic cases, maybe circular
 dependencies, two-way dependencies, etc.

My understanding has been that the link data would be used determine
when the controller driver can be brought up and active. Certainly both
sides of a link need to be 'live' before the link can be used. The
kernel must have a way to resolve the question of who starts first?,
whether it be the situation of the consumer starts before the producer,
or the situation of two components need to start before the controller
driver can start. That is the dependency chain I'm talking about.

 The only problem of unidirectional links from programming point of view
 is that destination port/interface should be exposed using some
 framework and driver of source link should grab it using the same
 framework, using port/endpoint node for identification. In case of
 bi-directional links the same process should happen but DT do not
 dictates who should expose and who grabs.
 
 So from programming point of view it should be easy to handle
 unidirectional links regardless of the direction. So I guess the best
 is to use data flow direction as it seems to be the most natural.

right.

 
 
  I'm okay with allowing the bidirection option if the helper functions
  are modified to validate the target endpoint. I think it needs to test
  for the following:
  - Make sure the endpoint either:
- does not have a backlink, or
- the backlink points back to the origin node
  - If the target is an endpoint node, then make sure the parent doesn't
have a link of any kind
  - If the target is a port node, make sure it doesn't have any endpoint
children nodes at all.
 
 I think link validation can be done at dts compile time.

I'm firm on the opinion that the checking must also happen at runtime.
The biggest part of my objection has been how easy it would be to get a
linkage out of sync, and dtc is not necessarily the last tool to touch
the dtb before the kernel gets booted. I want the kernel to flat out
reject any linkage that is improperly formed.
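The checks listed above could be modelled roughly as follows. This is a toy Python sketch, not the kernel's of_* API; the node representation and helper names are assumptions made for illustration:

```python
# Toy nodes: 'remote' stands in for the remote-endpoint phandle.
def make_node(name, kind, parent=None):
    node = {"name": name, "kind": kind, "parent": parent,
            "children": [], "remote": None}
    if parent is not None:
        parent["children"].append(node)
    return node

def link_is_valid(origin):
    """Apply the three checks: backlink consistency, endpoint targets
    whose parent port carries no link, port targets with no endpoints."""
    target = origin["remote"]
    if target is None:
        return False  # dangling link
    if target["kind"] == "endpoint":
        # backlink must be absent or point back at the origin
        if target["remote"] not in (None, origin):
            return False
        # the target endpoint's parent port must not carry its own link
        parent = target["parent"]
        return parent is None or parent["remote"] is None
    if target["kind"] == "port":
        # a port-level target must have no endpoint children at all
        return not any(c["kind"] == "endpoint" for c in target["children"])
    return False

# well-formed double link between two endpoints
port_a = make_node("port@0", "port")
port_b = make_node("port@1", "port")
ep_a = make_node("endpoint", "endpoint", port_a)
ep_b = make_node("endpoint", "endpoint", port_b)
ep_a["remote"], ep_b["remote"] = ep_b, ep_a
assert link_is_valid(ep_a) and link_is_valid(ep_b)

# linkage gone out of sync: the backlink now points elsewhere
ep_c = make_node("endpoint", "endpoint", make_node("port@2", "port"))
ep_b["remote"] = ep_c
assert not link_is_valid(ep_a)  # this is what the kernel should reject
```

A dtc-time check can run the same predicate over the tree, but only a runtime check catches a dtb modified after dtc runs.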

g.

 
 Regards
 Andrzej
 
  
  g.
  
  
 

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-21 Thread Tomi Valkeinen
On 20/03/14 19:01, Grant Likely wrote:

 I think depending on a generic graph walk is where I have the biggest
 concern about the design. I don't think it is a good idea for the master
 device to try a generic walk over the graph looking for other devices
 that might be components because it cannot know whether or not further
 links are relevant, or even if other endpoint nodes in the target
 hierarchy actually conform to the graph binding in the same way.
 
 Consider the example of a system with two video controllers (one
 embedded and one discrete), a display mux, and a panel. The display
 controller depends on the mux, and the mux depends on the panel. It
 would be entirely reasonable to start up the display subsystem with the
 embedded controller without the discrete adapter being available, but
 the way the current graph pattern is proposed there is no dependency
 information between the devices.

For some reason I don't understand this master and dependency way of
thinking. I just can't twist my brain to it, please help me =).

With the setup you describe above, why does the video controller depend
on the mux, and why is it the master? Why does the DMA engine not depend
on the embedded video controller, and why is the DMA engine not the master?

With the setup above, what are we really interested in? It's the
display, right? We want to have the display working, with resolution and
video timings that work for the display. The mux and the display
controllers are just necessary evils to make the display work. The
display depends on the mux to provide it a video stream. The mux depends
on the two video controllers to provide the mux two video streams. The
video controllers depend (possibly) on SoC's DMA, or PCI bus to provide
them video data.

And if we consider the same setup as above, but the mux has two
exclusive outputs, it again works fine with the dependency I described.
If you want to enable panel 1, you'll depend on mux and video
controllers, but not on panel 2. So you can leave the panel 2 driver out
and things still work ok.

But note that I don't think this dependency has strict relation to the
DT graph representation, see below.

 I really do think the dependency direction needs to be explicit so that
 a driver knows whether or not a given link is relevant for it to start,

I think that comes implicitly from the driver, it doesn't need to be
described in the DT. If a device has an input port, and the device is
configured to use that input port, the device depends on whatever is on
the other side of the input port. The device driver must know that.

Somehow a device driver needs to find out whether the drivers behind its
input ports are ready. It could use the links in the DT directly, if they
are supplied in that direction, or it could rely on someone else parsing
the DT, and exposing the information via some API.

I think it's simpler for the SW to follow the links directly, but that
would mean having the links in the opposite direction than the data
flow, which feels a bit odd to me.
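A one-level readiness check of the kind described here might look like this toy sketch (plain Python; the device names and the "registered drivers" set are invented — the real kernel would get this information from the DT links or from a framework API):

```python
# A device is ready to start once every remote device feeding its
# input ports has a bound driver -- one level of dependency only.
registered = set()  # devices whose drivers have been bound

# device -> remote devices on the other side of its input ports
inputs = {
    "display": ["mux"],
    "mux": ["video-ctrl"],
    "video-ctrl": [],
}

def ready_to_start(device):
    """Check only this device's direct producers, nothing deeper."""
    return all(remote in registered for remote in inputs[device])

assert ready_to_start("video-ctrl")   # no inputs: always ready
assert not ready_to_start("display")  # the mux driver is not bound yet
registered.add("mux")
assert ready_to_start("display")
```

Chained one level at a time, this is essentially option 1 further down the thread: each component checks only its own inputs, and readiness propagates as drivers register.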

 and there must be a driver that knows how to interpret the target
 node. A device that is a master needs to know which links are
 dependencies, and which are not.

Well, again, I may not quite understand what 'master' means here. But
for me, the display is the master of the pipeline. The driver for the
display is the one that has to decide what kind of video signal is
acceptable, how the signal must be enabled, and disabled, etc.

When someone (the master's master =) tells the panel to enable itself,
the panel needs to use an API to configure and enable its input ports.
The devices on the other end of the input ports then configure and
enable their inputs. And so on.

Anyway, I do think this is more of a SW organization topic than how we
should describe the hardware. As I see it, the parent-child
relationships in the DT describe the control paths and the graph
describes the data paths. Having the data paths described in the
direction of data flow (or double-linked in case of bi-dir link) sounds
logical to me, but I think the inverse could work fine too.

But using some kind of CPU-centric direction doesn't sound very usable;
it makes no sense for cases with peripheral-to-peripheral links, and the
control bus already describes the CPU-centric direction in cases where
there is a clear CPU-to-peripheral direction.

 I'm not even talking about the bi-directional link issue. This issue
 remains regardless of whether or not bidirectional links are used.
 
 I would solve it in one of the following two ways:
 
 1) Make masters only look at one level of dependency. Make the component
 driver responsible for checking /its/ dependencies. If its dependencies
 aren't yet met, then don't register the component as ready. We could
 probably make the drivers/base/component code allow components to pull
 in additional components as required. This approach shouldn't even
 require a change to the binding and eliminates any need for 

Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-21 Thread Tomi Valkeinen
On 21/03/14 13:47, Grant Likely wrote:

 I'm firm on the opinion that the checking must also happen at runtime.
 The biggest part of my objection has been how easy it would be to get a
 linkage out of sync, and dtc is not necessarily the last tool to touch
 the dtb before the kernel gets booted. I want the kernel to flat out
 reject any linkage that is improperly formed.

Isn't it trivial to verify it with the current v4l2 bindings? An
endpoint must have a 'remote-endpoint' property, and the endpoint on the
other end must have a similar property pointing back at the first
endpoint. Anything else is an error.

I agree that it's easier to write bad links in the dts with
double-linking than with single-linking, but it's still trivial to
verify it in the kernel.

 Tomi






Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-21 Thread Laurent Pinchart
Hi Grant,

On Friday 21 March 2014 08:15:34 Grant Likely wrote:
 On Fri, 21 Mar 2014 00:26:12 +0100, Laurent Pinchart wrote:
  On Thursday 20 March 2014 23:12:50 Grant Likely wrote:
   On Thu, 20 Mar 2014 19:52:53 +0100, Laurent Pinchart wrote:
Then we might not be talking about the same thing. I'm talking about
DT bindings to represent the topology of the device, not how drivers
are wired together.
   
   Possibly. I'm certainly confused now. You brought up the component
   helpers in drivers/base/component.c, so I thought working out
   dependencies is part of the purpose of this binding. Everything I've
   heard so far has given me the impression that the graph binding is tied
   up with knowing when all of the devices exist.
  
  The two are related, you're of course right about that.
  
  We're not really moving forward here. Part of our disagreement comes in my
  opinion from having different requirements and different views of the
  problem, caused by experiences with different kinds of devices. This is
  much easier to solve by sitting around the same table than discussing on
  mailing lists. I would propose a meeting at the ELC but that's still a
  bit far away and would delay progress by more than one month, which is
  probably not acceptable.
  
  I can reply to the e-mail where I've drawn one use case I have to deal
  with to detail my needs if that can help.
  
  Alternatively the UK isn't that far away and I could jump on a train if
  you can provide tea for the discussion :-)
 
 I'm game for that, but it is a long train ride. I'm up in Aberdeen which
 is 8 hours from London by train. Also, I'm travelling next week to
 California (Collaboration summit), so it will have to be in 2 weeks
 time.
 
 Why don't we instead try a Google Hangout or a phone call today.
 Anywhere between 11:30 and 14:00 GMT would work for me. I'd offer to
 provide the tea, but I haven't quite perfected transporter technology
 yet.

That would work for me, but not today I'm afraid. Will you already be in 
California on Monday ?

-- 
Regards,

Laurent Pinchart



Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-21 Thread Grant Likely
On Fri, Mar 21, 2014 at 12:44 PM, Laurent Pinchart
laurent.pinch...@ideasonboard.com wrote:
 Hi Grant,

 On Friday 21 March 2014 08:15:34 Grant Likely wrote:
 Why don't we instead try a Google Hangout or a phone call today.
 Anywhere between 11:30 and 14:00 GMT would work for me. I'd offer to
 provide the tea, but I haven't quite perfected transporter technology
 yet.

 That would work for me, but not today I'm afraid. Will you already be in
 California on Monday ?

Yes. Let's try the week after.

g.


Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-21 Thread Grant Likely
On Fri, 21 Mar 2014 14:16:20 +0200, Tomi Valkeinen tomi.valkei...@ti.com 
wrote:
 On 21/03/14 13:47, Grant Likely wrote:
 
  I'm firm on the opinion that the checking must also happen at runtime.
  The biggest part of my objection has been how easy it would be to get a
  linkage out of sync, and dtc is not necessarily the last tool to touch
  the dtb before the kernel gets booted. I want the kernel to flat out
  reject any linkage that is improperly formed.
 
 Isn't it trivial to verify it with the current v4l2 bindings? An
 endpoint must have a 'remote-endpoint' property, and the endpoint on the
 other end must have a similar property pointing back at the first
 endpoint. Anything else is an error.
 
 I agree that it's easier to write bad links in the dts with
 double-linking than with single-linking, but it's still trivial to
 verify it in the kernel.

Right, which is exactly what I'm asking for.

g.



Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-20 Thread Laurent Pinchart
Hi Grant,

On Thursday 20 March 2014 23:12:50 Grant Likely wrote:
> On Thu, 20 Mar 2014 19:52:53 +0100, Laurent Pinchart wrote:
> > On Thursday 20 March 2014 18:48:16 Grant Likely wrote:
> > > On Thu, 20 Mar 2014 15:38:04 -0300, Mauro Carvalho Chehab wrote:
> > > > Em Thu, 20 Mar 2014 17:54:31 + Grant Likely escreveu:
> > > > > On Wed, 12 Mar 2014 10:25:56 +, Russell King - ARM Linux wrote:

[snip]

> > > > > > I believe trying to do this according to the flow of data is just
> > > > > > wrong. You should always describe things from the primary device
> > > > > > for the CPU towards the peripheral devices and never the opposite
> > > > > > direction.
> > > > > 
> > > > > Agreed.
> > > > 
> > > > I don't agree, as which device is the primary one is relative.
> > > > 
> > > > Actually, in the case of a media data flow, the CPU is generally not
> > > > the primary device.
> > > > 
> > > > Even on general purpose computers, if the full data flow is taken into
> > > > the account, the CPU is a mere device that will just be used to copy
> > > > data either to GPU and speakers or to disk, eventually doing format
> > > > conversions, when the hardware is cheap and doesn't provide format
> > > > converters.
> > > > 
> > > > On more complex devices, like the ones we want to solve with the
> > > > media controller, like an embedded hardware like a TV or a STB, the
> > > > CPU is just an ancillary component that could even hang without
> > > > stopping TV reception, as the data flow can be fully done inside the
> > > > chipset.
> > > 
> > > We're talking about wiring up device drivers here, not data flow. Yes, I
> > > completely understand that data flow is often not even remotely
> > > cpu-centric. However, device drivers are, and the kernel needs to know
> > > the dependency graph for choosing what devices depend on other devices.
> > 
> > Then we might not be talking about the same thing. I'm talking about DT
> > bindings to represent the topology of the device, not how drivers are
> > wired together.
> 
> Possibly. I'm certainly confused now. You brought up the component helpers
> in drivers/base/component.c, so I thought working out dependencies is part
> of the purpose of this binding. Everything I've heard so far has given me
> the impression that the graph binding is tied up with knowing when all of
> the devices exist.

The two are related, you're of course right about that.

We're not really moving forward here. Part of our disagreement comes in my 
opinion from having different requirements and different views of the problem, 
caused by experiences with different kinds of devices. This is much easier to 
solve by sitting around the same table than discussing on mailing lists. I 
would propose a meeting at the ELC but that's still a bit far away and would 
delay progress by more than one month, which is probably not acceptable.

I can reply to the e-mail where I've drawn one use case I have to deal with to 
detail my needs if that can help.

Alternatively the UK isn't that far away and I could jump on a train if you 
can provide tea for the discussion :-)

> How device drivers get connected together may not strictly be a property
> of hardware, but it absolutely is informed by hardware topology.

-- 
Regards,

Laurent Pinchart



Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-20 Thread Grant Likely
On Thu, 20 Mar 2014 19:52:53 +0100, Laurent Pinchart 
 wrote:
> On Thursday 20 March 2014 18:48:16 Grant Likely wrote:
> > On Thu, 20 Mar 2014 15:38:04 -0300, Mauro Carvalho Chehab wrote:
> > > Em Thu, 20 Mar 2014 17:54:31 + Grant Likely escreveu:
> > > > On Wed, 12 Mar 2014 10:25:56 +, Russell King - ARM Linux wrote:
> > > > > On Mon, Mar 10, 2014 at 02:52:53PM +0100, Laurent Pinchart wrote:
> > > > > > In theory unidirectional links in DT are indeed enough. However,
> > > > > > let's not forget the following.
> > > > > > 
> > > > > > - There's no such thing as single start points for graphs. Sure, in
> > > > > > some simple cases the graph will have a single start point, but
> > > > > > that's not a generic rule. For instance the camera graphs
> > > > > > http://ideasonboard.org/media/omap3isp.ps and
> > > > > > http://ideasonboard.org/media/eyecam.ps have two camera sensors, and
> > > > > > thus two starting points from a data flow point of view.
> > > > > 
> > > > > I think we need to stop thinking of a graph linked in terms of data
> > > > > flow - that's really not useful.
> > > > > 
> > > > > Consider a display subsystem.  The CRTC is the primary interface for
> > > > > the CPU - this is the "most interesting" interface, it's the interface
> > > > > which provides access to the picture to be displayed for the CPU. 
> > > > > Other interfaces are secondary to that purpose - reading the I2C DDC
> > > > > bus for the display information is all secondary to the primary
> > > > > purpose of displaying a picture.
> > > > > 
> > > > > For a capture subsystem, the primary interface for the CPU is the
> > > > > frame grabber (whether it be an already encoded frame or not.)  The
> > > > > sensor devices are all secondary to that.
> > > > > 
> > > > > So, the primary software interface in each case is where the data for
> > > > > the primary purpose is transferred.  This is the point at which these
> > > > > graphs should commence since this is where we would normally start
> > > > > enumeration of the secondary interfaces.
> > > > > 
> > > > > V4L2 even provides interfaces for this: you open the capture device,
> > > > > which then allows you to enumerate the capture device's inputs, and
> > > > > this in turn allows you to enumerate their properties.  You don't open
> > > > > a particular sensor and work back up the tree.
> > > > > 
> > > > > I believe trying to do this according to the flow of data is just
> > > > > wrong. You should always describe things from the primary device for
> > > > > the CPU towards the peripheral devices and never the opposite
> > > > > direction.
> > > > 
> > > > Agreed.
> > > 
> > > I don't agree, as which device is the primary one is relative.
> > > 
> > > Actually, in the case of a media data flow, the CPU is generally not
> > > the primary device.
> > > 
> > > Even on general purpose computers, if the full data flow is taken into
> > > the account, the CPU is a mere device that will just be used to copy
> > > data either to GPU and speakers or to disk, eventually doing format
> > > conversions, when the hardware is cheap and doesn't provide format
> > > converters.
> > > 
> > > On more complex devices, like the ones we want to solve with the
> > > media controller, like an embedded hardware like a TV or a STB, the CPU
> > > is just an ancillary component that could even hang without stopping
> > > TV reception, as the data flow can be fully done inside the chipset.
> > 
> > We're talking about wiring up device drivers here, not data flow. Yes, I
> > completely understand that data flow is often not even remotely
> > cpu-centric. However, device drivers are, and the kernel needs to know
> > the dependency graph for choosing what devices depend on other devices.
> 
> Then we might not be talking about the same thing. I'm talking about DT 
> bindings to represent the topology of the device, not how drivers are wired 
> together.

Possibly. I'm certainly confused now. You brought up the component
helpers in drivers/base/component.c, so I thought working out
dependencies is part of the purpose of this binding. Everything I've
heard so far has given me the impression that the graph binding is tied
up with knowing when all of the devices exist.

How device drivers get connected together may not strictly be a property
of hardware, but it absolutely is informed by hardware topology.

g.



Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-20 Thread Laurent Pinchart
Hi Russell,

On Thursday 20 March 2014 18:18:20 Russell King - ARM Linux wrote:
> On Thu, Mar 20, 2014 at 07:16:29PM +0100, Laurent Pinchart wrote:
> > On Thursday 20 March 2014 17:54:31 Grant Likely wrote:
> > > On Wed, 12 Mar 2014 10:25:56 +, Russell King - ARM Linux wrote:
> > > > On Mon, Mar 10, 2014 at 02:52:53PM +0100, Laurent Pinchart wrote:
> > > > > In theory unidirectional links in DT are indeed enough. However,
> > > > > let's not forget the following.
> > > > > 
> > > > > - There's no such thing as single start points for graphs. Sure, in
> > > > > some simple cases the graph will have a single start point, but
> > > > > that's not a generic rule. For instance the camera graphs
> > > > > http://ideasonboard.org/media/omap3isp.ps and
> > > > > http://ideasonboard.org/media/eyecam.ps have two camera sensors, and
> > > > > thus two starting points from a data flow point of view.
> > > > 
> > > > I think we need to stop thinking of a graph linked in terms of data
> > > > flow - that's really not useful.
> > > > 
> > > > Consider a display subsystem.  The CRTC is the primary interface for
> > > > the CPU - this is the "most interesting" interface, it's the interface
> > > > which provides access to the picture to be displayed for the CPU. 
> > > > Other interfaces are secondary to that purpose - reading the I2C DDC
> > > > bus for the display information is all secondary to the primary
> > > > purpose of displaying a picture.
> > > > 
> > > > For a capture subsystem, the primary interface for the CPU is the
> > > > frame grabber (whether it be an already encoded frame or not.)  The
> > > > sensor devices are all secondary to that.
> > > > 
> > > > So, the primary software interface in each case is where the data for
> > > > the primary purpose is transferred.  This is the point at which these
> > > > graphs should commence since this is where we would normally start
> > > > enumeration of the secondary interfaces.
> > > > 
> > > > V4L2 even provides interfaces for this: you open the capture device,
> > > > which then allows you to enumerate the capture device's inputs, and
> > > > this in turn allows you to enumerate their properties.  You don't open
> > > > a particular sensor and work back up the tree.
> > > > 
> > > > I believe trying to do this according to the flow of data is just
> > > > wrong. You should always describe things from the primary device for
> > > > the CPU towards the peripheral devices and never the opposite
> > > > direction.
> > > 
> > > Agreed.
> > 
> > Absolutely not agreed. The whole concept of CPU towards peripherals only
> > makes sense for very simple devices and breaks as soon as the hardware
> > gets more complex. There's no such thing as CPU towards peripherals when
> > peripherals communicate directly.
> > 
> > Please consider use cases more complex than just a display controller and
> > an encoder, and you'll realize how messy not being able to parse the
> > whole graph at once will become. Let's try to improve things, not to make
> > sure to prevent support for future devices.
> 
> That's odd, I did.
> 
> Please draw some (ascii) diagrams of the situations you're saying this
> won't work for, because at the moment all I'm seeing is some vague
> hand-waving rather than anything factual that I can relate to.  Help
> us to actually _see_ the problem you have with this approach so we can
> understand it.

I've broken the diagram into pieces as it won't fit in 80 columns. Ports named 
by a capital letter enclosed in parentheses are connected together. 

Let's start with the on-board input paths, with an HDMI input connector and a 
camera sensor.

 ,-----------.      ,----------.
 | HDMI      |      | ADV7611  |
 | Connector | ---> | HDMI     | ---> (A)
 |           |      | Decoder  |
 `-----------'      `----------'

 ,-----------.
 | AR0331    |
 | Camera    | ---> (B)
 | Sensor    |
 `-----------'

We then get to the SoC.

   |
   |        ,----------.      ,-----------.
   |        | BT.656   |      | Test      |
(A) -[0]--> | Decoder  | ---> | Pattern   | ---> (C)
   |        |          |      | Generator |
   |        `----------'      `-----------'
   |
   |        ,----------.      ,------------.      ,-----------.
   |        | CSI-2    |      |            |      | Test      |
(B) -[1]--> | Receiver | ---> | Processing | ---> | Pattern   | ---> (D)
   |        |          |      |      |     |      | Generator |
   |        `----------'      `------+-----'      `-----------'
   |                                 |
   |                                 |   ,------------.
   |                                 `-> | Statistics | ---> ,-----.
   | --SOC--                             |            |      | DMA |
   |                                     `------------'      `-----'

The block named Processing is made of

      ,--------.      ,--------.      ,---------.
      | Faulty |      | Bayer  |      | White   |
 ---> | Pixels | ---> | to RGB | --->

Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-20 Thread Laurent Pinchart
On Thursday 20 March 2014 18:48:16 Grant Likely wrote:
> On Thu, 20 Mar 2014 15:38:04 -0300, Mauro Carvalho Chehab wrote:
> > Em Thu, 20 Mar 2014 17:54:31 + Grant Likely escreveu:
> > > On Wed, 12 Mar 2014 10:25:56 +, Russell King - ARM Linux wrote:
> > > > On Mon, Mar 10, 2014 at 02:52:53PM +0100, Laurent Pinchart wrote:
> > > > > In theory unidirectional links in DT are indeed enough. However,
> > > > > let's not forget the following.
> > > > > 
> > > > > - There's no such thing as single start points for graphs. Sure, in
> > > > > some simple cases the graph will have a single start point, but
> > > > > that's not a generic rule. For instance the camera graphs
> > > > > http://ideasonboard.org/media/omap3isp.ps and
> > > > > http://ideasonboard.org/media/eyecam.ps have two camera sensors, and
> > > > > thus two starting points from a data flow point of view.
> > > > 
> > > > I think we need to stop thinking of a graph linked in terms of data
> > > > flow - that's really not useful.
> > > > 
> > > > Consider a display subsystem.  The CRTC is the primary interface for
> > > > the CPU - this is the "most interesting" interface, it's the interface
> > > > which provides access to the picture to be displayed for the CPU. 
> > > > Other interfaces are secondary to that purpose - reading the I2C DDC
> > > > bus for the display information is all secondary to the primary
> > > > purpose of displaying a picture.
> > > > 
> > > > For a capture subsystem, the primary interface for the CPU is the
> > > > frame grabber (whether it be an already encoded frame or not.)  The
> > > > sensor devices are all secondary to that.
> > > > 
> > > > So, the primary software interface in each case is where the data for
> > > > the primary purpose is transferred.  This is the point at which these
> > > > graphs should commence since this is where we would normally start
> > > > enumeration of the secondary interfaces.
> > > > 
> > > > V4L2 even provides interfaces for this: you open the capture device,
> > > > which then allows you to enumerate the capture device's inputs, and
> > > > this in turn allows you to enumerate their properties.  You don't open
> > > > a particular sensor and work back up the tree.
> > > > 
> > > > I believe trying to do this according to the flow of data is just
> > > > wrong. You should always describe things from the primary device for
> > > > the CPU towards the peripheral devices and never the opposite
> > > > direction.
> > > 
> > > Agreed.
> > 
> > I don't agree, as which device is the primary one is relative.
> > 
> > Actually, in the case of a media data flow, the CPU is generally not
> > the primary device.
> > 
> > Even on general purpose computers, if the full data flow is taken into
> > the account, the CPU is a mere device that will just be used to copy
> > data either to GPU and speakers or to disk, eventually doing format
> > conversions, when the hardware is cheap and doesn't provide format
> > converters.
> > 
> > On more complex devices, like the ones we want to solve with the
> > media controller, like an embedded hardware like a TV or a STB, the CPU
> > is just an ancillary component that could even hang without stopping
> > TV reception, as the data flow can be fully done inside the chipset.
> 
> We're talking about wiring up device drivers here, not data flow. Yes, I
> completely understand that data flow is often not even remotely
> cpu-centric. However, device drivers are, and the kernel needs to know
> the dependency graph for choosing what devices depend on other devices.

Then we might not be talking about the same thing. I'm talking about DT 
bindings to represent the topology of the device, not how drivers are wired 
together. The latter isn't a property of the hardware.

-- 
Regards,

Laurent Pinchart



Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-20 Thread Laurent Pinchart
Hi Russell,

On Thursday 20 March 2014 18:18:20 Russell King - ARM Linux wrote:
> On Thu, Mar 20, 2014 at 07:16:29PM +0100, Laurent Pinchart wrote:
> > On Thursday 20 March 2014 17:54:31 Grant Likely wrote:
> > > On Wed, 12 Mar 2014 10:25:56 +, Russell King - ARM Linux wrote:

[snip]

> > > > I believe trying to do this according to the flow of data is just
> > > > wrong. You should always describe things from the primary device for
> > > > the CPU towards the peripheral devices and never the opposite
> > > > direction.
> > > 
> > > Agreed.
> > 
> > Absolutely not agreed. The whole concept of CPU towards peripherals only
> > makes sense for very simple devices and breaks as soon as the hardware
> > gets more complex. There's no such thing as CPU towards peripherals when
> > peripherals communicate directly.
> > 
> > Please consider use cases more complex than just a display controller and
> > an encoder, and you'll realize how messy not being able to parse the
> > whole graph at once will become. Let's try to improve things, not to make
> > sure to prevent support for future devices.
> 
> That's odd, I did.
> 
> Please draw some (ascii) diagrams of the situations you're saying this
> won't work for, because at the moment all I'm seeing is some vague
> hand-waving rather than anything factual that I can relate to.  Help
> us to actually _see_ the problem you have with this approach so we can
> understand it.

Working on it. Given my drawing skills this will take a bit of time.

-- 
Regards,

Laurent Pinchart



Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-20 Thread Grant Likely
On Thu, 20 Mar 2014 15:38:04 -0300, Mauro Carvalho Chehab 
 wrote:
> Em Thu, 20 Mar 2014 17:54:31 +
> Grant Likely  escreveu:
> 
> > On Wed, 12 Mar 2014 10:25:56 +, Russell King - ARM Linux 
> >  wrote:
> > > On Mon, Mar 10, 2014 at 02:52:53PM +0100, Laurent Pinchart wrote:
> > > > In theory unidirectional links in DT are indeed enough. However,
> > > > let's not forget the following.
> > > > 
> > > > - There's no such thing as single start points for graphs. Sure, in
> > > > some simple cases the graph will have a single start point, but
> > > > that's not a generic rule. For instance the camera graphs
> > > > http://ideasonboard.org/media/omap3isp.ps and
> > > > http://ideasonboard.org/media/eyecam.ps have two camera sensors, and
> > > > thus two starting points from a data flow point of view.
> > > 
> > > I think we need to stop thinking of a graph linked in terms of data
> > > flow - that's really not useful.
> > > 
> > > Consider a display subsystem.  The CRTC is the primary interface for
> > > the CPU - this is the "most interesting" interface, it's the interface
> > > which provides access to the picture to be displayed for the CPU.  Other
> > > interfaces are secondary to that purpose - reading the I2C DDC bus for
> > > the display information is all secondary to the primary purpose of
> > > displaying a picture.
> > > 
> > > For a capture subsystem, the primary interface for the CPU is the frame
> > > grabber (whether it be an already encoded frame or not.)  The sensor
> > > devices are all secondary to that.
> > > 
> > > So, the primary software interface in each case is where the data for
> > > the primary purpose is transferred.  This is the point at which these
> > > graphs should commence since this is where we would normally start
> > > enumeration of the secondary interfaces.
> > > 
> > > V4L2 even provides interfaces for this: you open the capture device,
> > > which then allows you to enumerate the capture device's inputs, and
> > > this in turn allows you to enumerate their properties.  You don't open
> > > a particular sensor and work back up the tree.
> > > 
> > > I believe trying to do this according to the flow of data is just wrong.
> > > You should always describe things from the primary device for the CPU
> > > towards the peripheral devices and never the opposite direction.
> > 
> > Agreed.
> 
> I don't agree, as what counts as the primary device is relative.
> 
> Actually, in the case of a media data flow, the CPU is generally not
> the primary device.
> 
> Even on general purpose computers, if the full data flow is taken into
> account, the CPU is a mere device that will just be used to copy data
> either to the GPU and speakers or to disk, possibly doing format
> conversions when the hardware is cheap and doesn't provide format
> converters.
> 
> On more complex devices, like the ones we want to handle with the
> media controller, such as embedded hardware like a TV or an STB, the CPU
> is just an ancillary component that could even hang without stopping
> TV reception, as the data flow can be handled entirely inside the chipset.

We're talking about wiring up device drivers here, not data flow. Yes, I
completely understand that data flow is often not even remotely
CPU-centric. However, device drivers are, and the kernel needs to know
the dependency graph to determine which devices depend on which others.

g.

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-20 Thread Laurent Pinchart
On Thursday 20 March 2014 18:43:16 Russell King - ARM Linux wrote:
> On Thu, Mar 20, 2014 at 03:38:04PM -0300, Mauro Carvalho Chehab wrote:
> > Em Thu, 20 Mar 2014 17:54:31 + Grant Likely escreveu:
> > > On Wed, 12 Mar 2014 10:25:56 +, Russell King - ARM Linux wrote:
> > > > On Mon, Mar 10, 2014 at 02:52:53PM +0100, Laurent Pinchart wrote:
> > > > > In theory unidirectional links in DT are indeed enough. However,
> > > > > let's not forget the following.
> > > > > 
> > > > > - There's no such thing as single start points for graphs. Sure, in
> > > > > some simple cases the graph will have a single start point, but
> > > > > that's not a generic rule. For instance the camera graphs
> > > > > http://ideasonboard.org/media/omap3isp.ps and
> > > > > http://ideasonboard.org/media/eyecam.ps have two camera sensors, and
> > > > > thus two starting points from a data flow point of view.
> > > > 
> > > > I think we need to stop thinking of a graph linked in terms of data
> > > > flow - that's really not useful.
> > > > 
> > > > Consider a display subsystem.  The CRTC is the primary interface for
> > > > the CPU - this is the "most interesting" interface, it's the interface
> > > > which provides access to the picture to be displayed for the CPU. 
> > > > Other interfaces are secondary to that purpose - reading the I2C DDC
> > > > bus for the display information is all secondary to the primary
> > > > purpose of displaying a picture.
> > > > 
> > > > For a capture subsystem, the primary interface for the CPU is the
> > > > frame grabber (whether it be an already encoded frame or not.)  The
> > > > sensor devices are all secondary to that.
> > > > 
> > > > So, the primary software interface in each case is where the data for
> > > > the primary purpose is transferred.  This is the point at which these
> > > > graphs should commence since this is where we would normally start
> > > > enumeration of the secondary interfaces.
> > > > 
> > > > V4L2 even provides interfaces for this: you open the capture device,
> > > > which then allows you to enumerate the capture device's inputs, and
> > > > this in turn allows you to enumerate their properties.  You don't open
> > > > a particular sensor and work back up the tree.
> > > > 
> > > > I believe trying to do this according to the flow of data is just
> > > > wrong.
> > > > You should always describe things from the primary device for the CPU
> > > > towards the peripheral devices and never the opposite direction.
> > > 
> > > Agreed.
> > 
> > I don't agree, as what counts as the primary device is relative.
> > 
> > Actually, in the case of a media data flow, the CPU is generally not
> > the primary device.
> > 
> > Even on general purpose computers, if the full data flow is taken into
> > account, the CPU is a mere device that will just be used to copy data
> > either to the GPU and speakers or to disk, possibly doing format
> > conversions when the hardware is cheap and doesn't provide format
> > converters.
> > 
> > On more complex devices, like the ones we want to handle with the
> > media controller, such as embedded hardware like a TV or an STB, the CPU
> > is just an ancillary component that could even hang without stopping
> > TV reception, as the data flow can be handled entirely inside the chipset.
> 
> The CPU is the _controlling_ component - it's the component that has to
> configure the peripherals so they all talk to each other in the right
> way.  Therefore, the view of it needs to be CPU centric.
> 
> If we were providing a DT description for consumption by some other
> device in the system, then the view should be as seen from that device
> instead.
> 
> Think about this.  Would you describe a system starting at, say, the
> system keyboard, and branching all the way through just because that's
> how you interact with it, or would you describe it from the CPU's point
> of view because that's what has to be in control of the system.

DT has been designed to represent a control-based view of the system. It does 
so pretty well using its tree-based model. However, it doesn't have a native 
way to represent a flow-based graph, hence the OF graph solution we're 
discussing. The whole point of this proposal is to represent the topology of 
the media device, not how each entity is controlled.
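
As a concrete illustration, a minimal OF-graph fragment in the style of
the current v4l2 bindings could look as follows (the node names, labels
and compatible strings here are hypothetical, made up for this example):

	camera-sensor {
		compatible = "acme,sensor";	/* hypothetical */
		port {
			sensor_out: endpoint {
				remote-endpoint = <&capture_in>;
			};
		};
	};

	capture-interface {
		compatible = "encom,capture";	/* hypothetical */
		port {
			capture_in: endpoint {
				remote-endpoint = <&sensor_out>;
			};
		};
	};

The two remote-endpoint phandles point at each other, describing the
data connection (the topology) independently of which driver controls
what; a consistency check only has to verify that each endpoint's
remote-endpoint refers back to it.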

-- 
Regards,

Laurent Pinchart



Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-20 Thread Russell King - ARM Linux
On Thu, Mar 20, 2014 at 03:38:04PM -0300, Mauro Carvalho Chehab wrote:
> Em Thu, 20 Mar 2014 17:54:31 +
> Grant Likely  escreveu:
> 
> > On Wed, 12 Mar 2014 10:25:56 +, Russell King - ARM Linux 
> >  wrote:
> > > On Mon, Mar 10, 2014 at 02:52:53PM +0100, Laurent Pinchart wrote:
> > > > In theory unidirectional links in DT are indeed enough. However, let's 
> > > > not 
> > > > forget the following.
> > > > 
> > > > - There's no such thing as single start points for graphs. Sure, in 
> > > > some 
> > > > simple cases the graph will have a single start point, but that's not a 
> > > > generic rule. For instance the camera graphs 
> > > > http://ideasonboard.org/media/omap3isp.ps and 
> > > > http://ideasonboard.org/media/eyecam.ps have two camera sensors, and 
> > > > thus two 
> > > > starting points from a data flow point of view.
> > > 
> > > I think we need to stop thinking of a graph linked in terms of data
> > > flow - that's really not useful.
> > > 
> > > Consider a display subsystem.  The CRTC is the primary interface for
> > > the CPU - this is the "most interesting" interface, it's the interface
> > > which provides access to the picture to be displayed for the CPU.  Other
> > > interfaces are secondary to that purpose - reading the I2C DDC bus for
> > > the display information is all secondary to the primary purpose of
> > > displaying a picture.
> > > 
> > > For a capture subsystem, the primary interface for the CPU is the frame
> > > grabber (whether it be an already encoded frame or not.)  The sensor
> > > devices are all secondary to that.
> > > 
> > > So, the primary software interface in each case is where the data for
> > > the primary purpose is transferred.  This is the point at which these
> > > graphs should commence since this is where we would normally start
> > > enumeration of the secondary interfaces.
> > > 
> > > V4L2 even provides interfaces for this: you open the capture device,
> > > which then allows you to enumerate the capture device's inputs, and
> > > this in turn allows you to enumerate their properties.  You don't open
> > > a particular sensor and work back up the tree.
> > > 
> > > I believe trying to do this according to the flow of data is just wrong.
> > > You should always describe things from the primary device for the CPU
> > > towards the peripheral devices and never the opposite direction.
> > 
> > Agreed.
> 
> I don't agree, as what counts as the primary device is relative.
> 
> Actually, in the case of a media data flow, the CPU is generally not
> the primary device.
> 
> Even on general purpose computers, if the full data flow is taken into
> account, the CPU is a mere device that will just be used to copy data
> either to the GPU and speakers or to disk, possibly doing format
> conversions when the hardware is cheap and doesn't provide format
> converters.
> 
> On more complex devices, like the ones we want to handle with the
> media controller, such as embedded hardware like a TV or an STB, the CPU
> is just an ancillary component that could even hang without stopping
> TV reception, as the data flow can be handled entirely inside the chipset.

The CPU is the _controlling_ component - it's the component that has to
configure the peripherals so they all talk to each other in the right
way.  Therefore, the view of it needs to be CPU centric.

If we were providing a DT description for consumption by some other
device in the system, then the view should be as seen from that device
instead.

Think about this.  Would you describe a system starting at, say, the
system keyboard, and branching all the way through just because that's
how you interact with it, or would you describe it from the CPU's point
of view because that's what has to be in control of the system.

-- 
FTTC broadband for 0.8mile line: now at 9.7Mbps down 460kbps up... slowly
improving, and getting towards what was expected from it.


Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-20 Thread Mauro Carvalho Chehab
Em Thu, 20 Mar 2014 17:54:31 +
Grant Likely  escreveu:

> On Wed, 12 Mar 2014 10:25:56 +, Russell King - ARM Linux 
>  wrote:
> > On Mon, Mar 10, 2014 at 02:52:53PM +0100, Laurent Pinchart wrote:
> > > In theory unidirectional links in DT are indeed enough. However, let's 
> > > not 
> > > forget the following.
> > > 
> > > - There's no such thing as single start points for graphs. Sure, in some 
> > > simple cases the graph will have a single start point, but that's not a 
> > > generic rule. For instance the camera graphs 
> > > http://ideasonboard.org/media/omap3isp.ps and 
> > > http://ideasonboard.org/media/eyecam.ps have two camera sensors, and thus 
> > > two 
> > > starting points from a data flow point of view.
> > 
> > I think we need to stop thinking of a graph linked in terms of data
> > flow - that's really not useful.
> > 
> > Consider a display subsystem.  The CRTC is the primary interface for
> > the CPU - this is the "most interesting" interface, it's the interface
> > which provides access to the picture to be displayed for the CPU.  Other
> > interfaces are secondary to that purpose - reading the I2C DDC bus for
> > the display information is all secondary to the primary purpose of
> > displaying a picture.
> > 
> > For a capture subsystem, the primary interface for the CPU is the frame
> > grabber (whether it be an already encoded frame or not.)  The sensor
> > devices are all secondary to that.
> > 
> > So, the primary software interface in each case is where the data for
> > the primary purpose is transferred.  This is the point at which these
> > graphs should commence since this is where we would normally start
> > enumeration of the secondary interfaces.
> > 
> > V4L2 even provides interfaces for this: you open the capture device,
> > which then allows you to enumerate the capture device's inputs, and
> > this in turn allows you to enumerate their properties.  You don't open
> > a particular sensor and work back up the tree.
> > 
> > I believe trying to do this according to the flow of data is just wrong.
> > You should always describe things from the primary device for the CPU
> > towards the peripheral devices and never the opposite direction.
> 
> Agreed.

I don't agree, as what counts as the primary device is relative.

Actually, in the case of a media data flow, the CPU is generally not
the primary device.

Even on general purpose computers, if the full data flow is taken into
account, the CPU is a mere device that will just be used to copy data
either to the GPU and speakers or to disk, possibly doing format
conversions when the hardware is cheap and doesn't provide format
converters.

On more complex devices, like the ones we want to handle with the
media controller, such as embedded hardware like a TV or an STB, the CPU
is just an ancillary component that could even hang without stopping
TV reception, as the data flow can be handled entirely inside the chipset.

Regards,
Mauro


Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-20 Thread Russell King - ARM Linux
On Thu, Mar 20, 2014 at 07:16:29PM +0100, Laurent Pinchart wrote:
> On Thursday 20 March 2014 17:54:31 Grant Likely wrote:
> > On Wed, 12 Mar 2014 10:25:56 +, Russell King - ARM Linux wrote:
> > > On Mon, Mar 10, 2014 at 02:52:53PM +0100, Laurent Pinchart wrote:
> > > > In theory unidirectional links in DT are indeed enough. However, let's
> > > > not forget the following.
> > > > 
> > > > - There's no such thing as single start points for graphs. Sure, in some
> > > > simple cases the graph will have a single start point, but that's not a
> > > > generic rule. For instance the camera graphs
> > > > http://ideasonboard.org/media/omap3isp.ps and
> > > > http://ideasonboard.org/media/eyecam.ps have two camera sensors, and
> > > > thus two starting points from a data flow point of view.
> > > 
> > > I think we need to stop thinking of a graph linked in terms of data
> > > flow - that's really not useful.
> > > 
> > > Consider a display subsystem.  The CRTC is the primary interface for
> > > the CPU - this is the "most interesting" interface, it's the interface
> > > which provides access to the picture to be displayed for the CPU.  Other
> > > interfaces are secondary to that purpose - reading the I2C DDC bus for
> > > the display information is all secondary to the primary purpose of
> > > displaying a picture.
> > > 
> > > For a capture subsystem, the primary interface for the CPU is the frame
> > > grabber (whether it be an already encoded frame or not.)  The sensor
> > > devices are all secondary to that.
> > > 
> > > So, the primary software interface in each case is where the data for
> > > the primary purpose is transferred.  This is the point at which these
> > > graphs should commence since this is where we would normally start
> > > enumeration of the secondary interfaces.
> > > 
> > > V4L2 even provides interfaces for this: you open the capture device,
> > > which then allows you to enumerate the capture device's inputs, and
> > > this in turn allows you to enumerate their properties.  You don't open
> > > a particular sensor and work back up the tree.
> > > 
> > > I believe trying to do this according to the flow of data is just wrong.
> > > You should always describe things from the primary device for the CPU
> > > towards the peripheral devices and never the opposite direction.
> > 
> > Agreed.
> 
> Absolutely not agreed. The whole concept of CPU towards peripherals only 
> makes 
> sense for very simple devices and breaks as soon as the hardware gets more 
> complex. There's no such thing as CPU towards peripherals when peripherals 
> communicate directly.
> 
> Please consider use cases more complex than just a display controller and an 
> encoder, and you'll realize how messy not being able to parse the whole graph 
> at once will become. Let's try to improve things, not preclude support 
> for future devices.

That's odd, I did.

Please draw some (ascii) diagrams of the situations you're saying this
won't work for, because at the moment all I'm seeing is some vague
hand-waving rather than anything factual that I can relate to.  Help
us to actually _see_ the problem you have with this approach so we can
understand it.

-- 
FTTC broadband for 0.8mile line: now at 9.7Mbps down 460kbps up... slowly
improving, and getting towards what was expected from it.


Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-20 Thread Laurent Pinchart
On Thursday 20 March 2014 17:54:31 Grant Likely wrote:
> On Wed, 12 Mar 2014 10:25:56 +, Russell King - ARM Linux wrote:
> > On Mon, Mar 10, 2014 at 02:52:53PM +0100, Laurent Pinchart wrote:
> > > In theory unidirectional links in DT are indeed enough. However, let's
> > > not forget the following.
> > > 
> > > - There's no such thing as single start points for graphs. Sure, in some
> > > simple cases the graph will have a single start point, but that's not a
> > > generic rule. For instance the camera graphs
> > > http://ideasonboard.org/media/omap3isp.ps and
> > > http://ideasonboard.org/media/eyecam.ps have two camera sensors, and
> > > thus two starting points from a data flow point of view.
> > 
> > I think we need to stop thinking of a graph linked in terms of data
> > flow - that's really not useful.
> > 
> > Consider a display subsystem.  The CRTC is the primary interface for
> > the CPU - this is the "most interesting" interface, it's the interface
> > which provides access to the picture to be displayed for the CPU.  Other
> > interfaces are secondary to that purpose - reading the I2C DDC bus for
> > the display information is all secondary to the primary purpose of
> > displaying a picture.
> > 
> > For a capture subsystem, the primary interface for the CPU is the frame
> > grabber (whether it be an already encoded frame or not.)  The sensor
> > devices are all secondary to that.
> > 
> > So, the primary software interface in each case is where the data for
> > the primary purpose is transferred.  This is the point at which these
> > graphs should commence since this is where we would normally start
> > enumeration of the secondary interfaces.
> > 
> > V4L2 even provides interfaces for this: you open the capture device,
> > which then allows you to enumerate the capture device's inputs, and
> > this in turn allows you to enumerate their properties.  You don't open
> > a particular sensor and work back up the tree.
> > 
> > I believe trying to do this according to the flow of data is just wrong.
> > You should always describe things from the primary device for the CPU
> > towards the peripheral devices and never the opposite direction.
> 
> Agreed.

Absolutely not agreed. The whole concept of CPU towards peripherals only makes 
sense for very simple devices and breaks as soon as the hardware gets more 
complex. There's no such thing as CPU towards peripherals when peripherals 
communicate directly.

Please consider use cases more complex than just a display controller and an 
encoder, and you'll realize how messy not being able to parse the whole graph 
at once will become. Let's try to improve things, not preclude support 
for future devices.

-- 
Regards,

Laurent Pinchart



Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-20 Thread Grant Likely
On Wed, 12 Mar 2014 10:25:56 +, Russell King - ARM Linux 
 wrote:
> On Mon, Mar 10, 2014 at 02:52:53PM +0100, Laurent Pinchart wrote:
> > In theory unidirectional links in DT are indeed enough. However, let's not 
> > forget the following.
> > 
> > - There's no such thing as single start points for graphs. Sure, in some 
> > simple cases the graph will have a single start point, but that's not a 
> > generic rule. For instance the camera graphs 
> > http://ideasonboard.org/media/omap3isp.ps and 
> > http://ideasonboard.org/media/eyecam.ps have two camera sensors, and thus 
> > two 
> > starting points from a data flow point of view.
> 
> I think we need to stop thinking of a graph linked in terms of data
> flow - that's really not useful.
> 
> Consider a display subsystem.  The CRTC is the primary interface for
> the CPU - this is the "most interesting" interface, it's the interface
> which provides access to the picture to be displayed for the CPU.  Other
> interfaces are secondary to that purpose - reading the I2C DDC bus for
> the display information is all secondary to the primary purpose of
> displaying a picture.
> 
> For a capture subsystem, the primary interface for the CPU is the frame
> grabber (whether it be an already encoded frame or not.)  The sensor
> devices are all secondary to that.
> 
> So, the primary software interface in each case is where the data for
> the primary purpose is transferred.  This is the point at which these
> graphs should commence since this is where we would normally start
> enumeration of the secondary interfaces.
> 
> V4L2 even provides interfaces for this: you open the capture device,
> which then allows you to enumerate the capture device's inputs, and
> this in turn allows you to enumerate their properties.  You don't open
> a particular sensor and work back up the tree.
> 
> I believe trying to do this according to the flow of data is just wrong.
> You should always describe things from the primary device for the CPU
> towards the peripheral devices and never the opposite direction.

Agreed.

g.


Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-20 Thread Grant Likely
On Tue, 11 Mar 2014 16:21:49 +0100, Laurent Pinchart 
 wrote:
> Hi Philipp,
> 
> On Tuesday 11 March 2014 16:07:00 Philipp Zabel wrote:
> > Am Montag, den 10.03.2014, 14:58 + schrieb Grant Likely:
> > > On Mon, 10 Mar 2014 14:52:53 +0100, Laurent Pinchart wrote:
> 
> [snip]
> 
> > > > In theory unidirectional links in DT are indeed enough. However, let's
> > > > not forget the following.
> > > > 
> > > > - There's no such thing as single start points for graphs. Sure, in some
> > > > simple cases the graph will have a single start point, but that's not a
> > > > generic rule. For instance the camera graphs
> > > > http://ideasonboard.org/media/omap3isp.ps and
> > > > http://ideasonboard.org/media/eyecam.ps have two camera sensors, and
> > > > thus two starting points from a data flow point of view. And if you
> > > > want a better understanding of how complex media graphs can become,
> > > > have a look at http://ideasonboard.org/media/vsp1.0.pdf (that's a real
> > > > world example, albeit all connections are internal to the SoC in that
> > > > particular case, and don't need to be described in DT).
> > > > 
> > > > - There's also no such thing as a master device that can just point to
> > > > slave devices. Once again simple cases exist where that model could
> > > > work, but real world examples exist of complex pipelines with dozens of
> > > > elements all implemented by a separate IP core and handled by separate
> > > > drivers, forming a graph with long chains and branches. We thus need
> > > > real graph bindings.
> > > > 
> > > > - Finally, having no backlinks in DT would make the software
> > > > implementation very complex. We need to be able to walk the graph in a
> > > > generic way without having any of the IP core drivers loaded, and
> > > > without any specific starting point. We would thus need to parse the
> > > > complete DT tree, looking at all nodes and trying to find out whether
> > > > they're part of the graph we're trying to walk. The complexity of the
> > > > operation would be at best quadratic to the number of nodes in the whole
> > > > DT and to the number of nodes in the graph.
> > > 
> > > Not really. To begin with, you cannot determine any meaning of a node
> > > across the tree (aside from it being an endpoint) without also
> > > understanding the binding that the node is a part of. That means you
> > > need to have something matching against the compatible string on both
> > > ends of the linkage. For instance:
> > > 
> > > panel {
> > >   compatible = "acme,lvds-panel";
> > >   lvds-port: port {
> > >   };
> > > };
> > > 
> > > display-controller {
> > >   compatible = "encom,video";
> > >   port {
  remote-endpoint = <&lvds-port>;
> > >   };
> > > };
> > > 
> > > In the above example, the encom,video driver has absolutely zero
> > > information about what the acme,lvds-panel binding actually implements.
> > > There needs to be both a driver for the "acme,lvds-panel" binding and
> > > one for the "encom,video" binding (even if the acme,lvds-panel binding
> > > is very thin and defers the functionality to the video controller).
> > > 
> > > What you want here is the drivers to register each side of the
> > > connection. That could be modeled with something like the following
> > > (pseudocode):
> > > 
> > > struct of_endpoint {
> > >   struct list_head list;
> > >   struct device_node *ep_node;
> > >   void *context;
> > >   void (*cb)(struct of_endpoint *ep, void *data);
> > > };
> > > 
> > > int of_register_port(struct device_node *node, void (*cb)(struct of_endpoint
> > > *ep, void *data), void *data)
> > > {
> > >   struct of_endpoint *ep = kzalloc(sizeof(*ep), GFP_KERNEL);
> > > 
> > >   ep->ep_node = node;
> > >   ep->context = data;
> > >   ep->cb = cb;
> > > 
> > >   /* store the endpoint to a list */
> > >   /* check if the endpoint has a remote-endpoint link */
> > >   /* If so, then link the two together and call the
> > >    * callbacks */
> > > }
> > > 
> > > That's neither expensive nor complicated.
> > > 
> > > Originally I suggested walking the whole tree multiple times, but as
> > > mentioned that doesn't scale, and as I thought about the above it isn't
> > > even a valid thing to do. Everything has to be driven by drivers, so
> > > even if the backlinks are there, nothing can be done with the link until
> > > the other side goes through enumeration independently.
> > 
> > I have implemented your suggestion as follows. Basically, this allows
> > either endpoint to contain the remote-endpoint link, as long as all
> > drivers register their endpoints in the probe function and return
> > -EPROBE_DEFER from their component framework bind callback until all
> > their endpoints are connected.
> 
> Besides bringing the whole graph down when a single component can't be probed 
> (either because the corresponding hardware device is missing, broken, or the 
> driver isn't loaded),

I don't think that follows. It creates a list 

Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-20 Thread Grant Likely
On Tue, 11 Mar 2014 16:07:00 +0100, Philipp Zabel  
wrote:
> Hi Grant,
> 
> Am Montag, den 10.03.2014, 14:58 + schrieb Grant Likely:
> > On Mon, 10 Mar 2014 14:52:53 +0100, Laurent Pinchart 
> >  wrote:
> > > On Monday 10 March 2014 12:18:20 Tomi Valkeinen wrote:
> > > > On 08/03/14 13:41, Grant Likely wrote:
> > > > >> Ok. If we go for single directional link, the question is then: which
> > > > >> way? And is the direction different for display and camera, which are
> > > > >> kind of reflections of each other?
> > > > > 
> > > > > In general I would recommend choosing whichever device you would
> > > > > sensibly think of as a master. In the camera case I would choose the
> > > > > camera controller node instead of the camera itself, and in the 
> > > > > display
> > > > > case I would choose the display controller instead of the panel. The
> > > > > binding author needs to choose what she thinks makes the most sense, 
> > > > > but
> > > > > drivers can still use it if it turns out to be 'backwards'
> > > > 
> > > > I would perhaps choose the same approach, but at the same time I think
> > > > it's all but clear. The display controller doesn't control the panel any
> > > > more than a DMA controller controls, say, the display controller.
> > > > 
> > > > In fact, in earlier versions of OMAP DSS DT support I had a simpler port
> > > > description, and in that I had the panel as the master (i.e. link from
> > > > panel to dispc) because the panel driver uses the display controller's
> > > > features to provide the panel device a data stream.
> > > > 
> > > > And even with the current OMAP DSS DT version, which uses the v4l2 style
> > > > ports/endpoints, the driver model is still the same, and only links
> > > > towards upstream are used.
> > > > 
> > > > So one reason I'm happy with the dual-linking is that I can easily
> > > > follow the links from the downstream entities to upstream entities, and
> > > > other people, who have different driver model, can easily do the 
> > > > opposite.
> > > > 
> > > > But I agree that single-linking is enough and this can be handled at
> > > > runtime, even if it makes the code more complex. And perhaps requires
> > > > extra data in the dts, to give the start points for the graph.
> > > 
> > > In theory unidirectional links in DT are indeed enough. However, let's
> > > not forget the following.
> > > 
> > > - There's no such thing as single start points for graphs. Sure, in some
> > > simple cases the graph will have a single start point, but that's not a
> > > generic rule. For instance the camera graphs
> > > http://ideasonboard.org/media/omap3isp.ps and
> > > http://ideasonboard.org/media/eyecam.ps have two camera sensors, and thus
> > > two starting points from a data flow point of view. And if you want a
> > > better understanding of how complex media graphs can become, have a look
> > > at http://ideasonboard.org/media/vsp1.0.pdf (that's a real world example,
> > > albeit all connections are internal to the SoC in that particular case,
> > > and don't need to be described in DT).
> > > 
> > > - There's also no such thing as a master device that can just point to
> > > slave devices. Once again simple cases exist where that model could work,
> > > but real world examples exist of complex pipelines with dozens of
> > > elements all implemented by a separate IP core and handled by separate
> > > drivers, forming a graph with long chains and branches. We thus need real
> > > graph bindings.
> > > 
> > > - Finally, having no backlinks in DT would make the software
> > > implementation very complex. We need to be able to walk the graph in a
> > > generic way without having any of the IP core drivers loaded, and without
> > > any specific starting point. We would thus need to parse the complete DT
> > > tree, looking at all nodes and trying to find out whether they're part of
> > > the graph we're trying to walk. The complexity of the operation would be
> > > at best quadratic to the number of nodes in the whole DT and to the
> > > number of nodes in the graph.
> > 
> > Not really. To begin with, you cannot determine any meaning of a node
> > across the tree (aside from it being an endpoint) without also
> > understanding the binding that the node is a part of. That means you
> > need to have something matching against the compatible string on both
> > ends of the linkage. For instance:
> > 
> > panel {
> >         compatible = "acme,lvds-panel";
> >         lvds-port: port {
> >         };
> > };
> > 
> > display-controller {
> >         compatible = "encom,video";
> >         port {
> >                 remote-endpoint = <&lvds-port>;
> >         };
> > };
> > 
> > In the above example, the encom,video driver has absolutely zero
> > information about what the acme,lvds-panel binding actually implements.
> > There needs to be both a driver for the "acme,lvds-panel" binding and
> > one for the 
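
Laurent's complexity concern above (that walking a graph without backlinks is at best quadratic) and Grant's counter can be made concrete with a toy model. This is an illustrative Python sketch, not kernel code; the node paths and the dict representation are invented for the example. The point it shows: a single linear pre-pass over all endpoints builds a reverse map, so walking against the link direction afterwards costs O(1) per hop rather than a rescan of the whole tree.

```python
# Toy model: a flattened DT is represented as a dict mapping each
# endpoint node path to its properties; only "remote-endpoint" matters
# here, and only unidirectional links are present in the source data.

def build_backlinks(endpoints):
    """One linear pass over every endpoint builds the reverse map.

    After this pass, a node on the receiving end of a link can find its
    upstream peer in O(1), even though the DT itself carried no backlink.
    """
    reverse = {}
    for node, props in endpoints.items():
        target = props.get("remote-endpoint")
        if target is not None:
            reverse[target] = node
    return reverse

# Unidirectional link: only the display controller points at the panel.
endpoints = {
    "/display-controller/port/endpoint": {
        "remote-endpoint": "/panel/port/endpoint",
    },
    "/panel/port/endpoint": {},
}

reverse = build_backlinks(endpoints)
# The panel can now locate its upstream without a backlink in DT.
assert reverse["/panel/port/endpoint"] == "/display-controller/port/endpoint"
```

The pre-pass itself is linear in the number of endpoint nodes, which is the cheaper cost Grant's position implies; the price is that the kernel must do this bookkeeping at runtime instead of reading it from the tree.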

Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-20 Thread Grant Likely
On Tue, 11 Mar 2014 14:04:34 +0100, Andrzej Hajda wrote:
> On 03/10/2014 04:15 PM, Laurent Pinchart wrote:
> > Hi Grant,
> > 
> > On Monday 10 March 2014 14:58:15 Grant Likely wrote:
> >> On Mon, 10 Mar 2014 14:52:53 +0100, Laurent Pinchart wrote:
> >>> On Monday 10 March 2014 12:18:20 Tomi Valkeinen wrote:
>  On 08/03/14 13:41, Grant Likely wrote:
> >> Ok. If we go for single directional link, the question is then: which
> >> way? And is the direction different for display and camera, which are
> >> kind of reflections of each other?
> >
> > In general I would recommend choosing whichever device you would
> > sensibly think of as a master. In the camera case I would choose the
> > camera controller node instead of the camera itself, and in the
> > display case I would choose the display controller instead of the
> > panel. The binding author needs to choose what she thinks makes the
> > most sense, but drivers can still use it if it turns out to be
> > 'backwards'
> 
>  I would perhaps choose the same approach, but at the same time I think
>  it's anything but clear. The display controller doesn't control the panel any
>  more than a DMA controller controls, say, the display controller.
> 
>  In fact, in earlier versions of OMAP DSS DT support I had a simpler port
>  description, and in that I had the panel as the master (i.e. link from
>  panel to dispc) because the panel driver uses the display controller's
>  features to provide the panel device a data stream.
> 
>  And even with the current OMAP DSS DT version, which uses the v4l2 style
>  ports/endpoints, the driver model is still the same, and only links
>  towards upstream are used.
> 
>  So one reason I'm happy with the dual-linking is that I can easily
>  follow the links from the downstream entities to upstream entities, and
>  other people, who have different driver model, can easily do the
>  opposite.
> 
>  But I agree that single-linking is enough and this can be handled at
>  runtime, even if it makes the code more complex. And perhaps requires
>  extra data in the dts, to give the start points for the graph.
> >>>
> >>> In theory unidirectional links in DT are indeed enough. However, let's not
> >>> forget the following.
> >>>
> >>> - There's no such thing as single start points for graphs. Sure, in some
> >>> simple cases the graph will have a single start point, but that's not a
> >>> generic rule. For instance the camera graphs
> >>> http://ideasonboard.org/media/omap3isp.ps and
> >>> http://ideasonboard.org/media/eyecam.ps have two camera sensors, and thus
> >>> two starting points from a data flow point of view. And if you want a
> >>> better understanding of how complex media graphs can become, have a look
> >>> at http://ideasonboard.org/media/vsp1.0.pdf (that's a real world example,
> >>> albeit all connections are internal to the SoC in that particular case,
> >>> and don't need to be described in DT).
> >>>
> >>> - There's also no such thing as a master device that can just point to
> >>> slave devices. Once again simple cases exist where that model could work,
> >>> but real world examples exist of complex pipelines with dozens of
> >>> elements all implemented by a separate IP core and handled by separate
> >>> drivers, forming a graph with long chains and branches. We thus need real
> >>> graph bindings.
> >>>
> >>> - Finally, having no backlinks in DT would make the software
> >>> implementation very complex. We need to be able to walk the graph in a
> >>> generic way without having any of the IP core drivers loaded, and without
> >>> any specific starting point. We would thus need to parse the complete DT
> >>> tree, looking at all nodes and trying to find out whether they're part of
> >>> the graph we're trying to walk. The complexity of the operation would be
> >>> at best quadratic to the number of nodes in the whole DT and to the number
> >>> of nodes in the graph.
> >>
> >> Not really. To begin with, you cannot determine any meaning of a node
> >> across the tree (aside from it being an endpoint)
> > 
> > That's the important part. I can assume the target node of the
> > remote-endpoint phandle to be an endpoint, and can thus assume that it
> > implements the of-graph bindings. That's all I need to be able to walk
> > the graph in a generic way.
> > 
> >> without also understanding the binding that the node is a part of. That
> >> means you need to have something matching against the compatible string on
> >> both ends of the linkage. For instance:
> >>
> >> panel {
> >>         compatible = "acme,lvds-panel";
> >>         lvds-port: port {
> >>         };
> >> };
> >>
> >> display-controller {
> >>         compatible = "encom,video";
> >>         port {
> >>                 remote-endpoint = <&lvds-port>;
> >>         };
> >> };
> >>
> >> In the above example, the encom,video driver has absolutely zero
> >> information about what the acme,lvds-panel binding actually implements.

Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-20 Thread Grant Likely
On Tue, 11 Mar 2014 14:16:37 +0100, Laurent Pinchart wrote:
> On Tuesday 11 March 2014 14:59:20 Tomi Valkeinen wrote:
> > So depending on the use case, the endpoints would point to opposite
> > direction from the encoder's point of view.
> > 
> > And if I gathered Grant's opinion correctly (correct me if I'm wrong),
> > he thinks things should be explicit, i.e. the bindings for, say, an
> > encoder should state that the encoder's output endpoint _must_ contain a
> > remote-endpoint property, whereas the encoder's input endpoint _must
> > not_ contain a remote-endpoint property.
> 
> Actually my understanding was that DT links would have the same direction
> as the data flow. There would be no ambiguity in that case as the direction
> of the data flow is known. What happens for bidirectional data flows still
> needs to be discussed though. And if we want to use the of-graph bindings
> to describe graphs without a data flow, a decision will need to be taken
> there too.

On further thinking, I would say linkage direction should be in the
direction that would be considered the dependency order... I'm going to
soften my position though. I think the generic pattern should still
recommend unidirectional links in the direction of device dependency, but
I'm okay with allowing the bidirectional option if the helper functions
are modified to validate the target endpoint. I think it needs to test
for the following:
- Make sure the endpoint either:
  - does not have a backlink, or
  - the backlink points back to the origin node
- If the target is an endpoint node, then make sure the parent doesn't
  have a link of any kind
- If the target is a port node, make sure it doesn't have any endpoint
  child nodes at all.
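
The three checks Grant lists could be sketched as follows. This is a hedged illustration in Python rather than kernel C, operating on plain dicts standing in for device tree nodes; the field names ("type", "remote", "parent", "children", "name") are invented for the model and do not correspond to any real of_node API.

```python
# Sketch of the three link-validation rules above, on a toy node model:
# each node is a dict with "type" ("endpoint" or "port"), an optional
# "remote" backlink (by name), a "parent" node, and "children" nodes.

def link_target_is_valid(origin, target):
    """Return True if 'origin' endpoint may link to 'target'.

    Rule 1: the target either has no backlink, or its backlink points
            back at the origin node.
    Rule 2: if the target is an endpoint, its parent must not carry a
            link of any kind.
    Rule 3: if the target is a port, it must have no endpoint children.
    """
    if target.get("remote") not in (None, origin["name"]):
        return False  # rule 1: backlink points somewhere else
    if target["type"] == "endpoint":
        parent = target.get("parent")
        if parent is not None and parent.get("remote") is not None:
            return False  # rule 2: target's parent has its own link
    elif target["type"] == "port":
        if any(c["type"] == "endpoint" for c in target.get("children", [])):
            return False  # rule 3: port target contains endpoints
    return True

# A properly formed bidirectional pair passes; a stale backlink fails.
a = {"name": "a", "type": "endpoint", "remote": "b", "parent": None}
b = {"name": "b", "type": "endpoint", "remote": "a",
     "parent": {"remote": None}}
assert link_target_is_valid(a, b)
b["remote"] = "c"  # backlink now points at a third node
assert not link_target_is_valid(a, b)
```

This is the runtime rejection of malformed linkage that the thread asks for: a link that is out of sync after dtb manipulation fails rule 1 instead of being silently accepted.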

g.


--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-20 Thread Grant Likely
On Tue, 11 Mar 2014 12:43:44 +0100, Laurent Pinchart wrote:
> Hi Philipp,
> 
> On Monday 10 March 2014 16:40:30 Philipp Zabel wrote:
> > On Monday, 10 March 2014 at 16:15 +0100, Laurent Pinchart wrote:
> > > On Monday 10 March 2014 14:58:15 Grant Likely wrote:
> > > > On Mon, 10 Mar 2014 14:52:53 +0100, Laurent Pinchart wrote:
> > > > > On Monday 10 March 2014 12:18:20 Tomi Valkeinen wrote:
> > > > > > On 08/03/14 13:41, Grant Likely wrote:
> > > > > > >> Ok. If we go for single directional link, the question is then:
> > > > > > >> which way? And is the direction different for display and camera,
> > > > > > >> which are kind of reflections of each other?
> > > > > > > 
> > > > > > > In general I would recommend choosing whichever device you would
> > > > > > > sensibly think of as a master. In the camera case I would choose
> > > > > > > the camera controller node instead of the camera itself, and in
> > > > > > > the display case I would choose the display controller instead of
> > > > > > > the panel. The binding author needs to choose what she things
> > > > > > > makes the most sense, but drivers can still use if it it turns out
> > > > > > > to be 'backwards'
> > > > > > 
> > > > > > I would perhaps choose the same approach, but at the same time I
> > > > > > think it's anything but clear. The display controller doesn't control the
> > > > > > panel any more than a DMA controller controls, say, the display
> > > > > > controller.
> > > > > > 
> > > > > > In fact, in earlier versions of OMAP DSS DT support I had a simpler
> > > > > > port description, and in that I had the panel as the master (i.e.
> > > > > > link from panel to dispc) because the panel driver uses the display
> > > > > > controller's features to provide the panel device a data stream.
> > > > > > 
> > > > > > And even with the current OMAP DSS DT version, which uses the v4l2
> > > > > > style ports/endpoints, the driver model is still the same, and only
> > > > > > links towards upstream are used.
> > > > > > 
> > > > > > So one reason I'm happy with the dual-linking is that I can easily
> > > > > > follow the links from the downstream entities to upstream entities,
> > > > > > and other people, who have different driver model, can easily do the
> > > > > > opposite.
> > > > > > 
> > > > > > But I agree that single-linking is enough and this can be handled at
> > > > > > runtime, even if it makes the code more complex. And perhaps
> > > > > > requires extra data in the dts, to give the start points for the
> > > > > > graph.
> > > > > 
> > > > > In theory unidirectional links in DT are indeed enough. However, let's
> > > > > not forget the following.
> > > > > 
> > > > > - There's no such thing as single start points for graphs. Sure, in
> > > > > some simple cases the graph will have a single start point, but that's
> > > > > not a generic rule. For instance the camera graphs
> > > > > http://ideasonboard.org/media/omap3isp.ps and
> > > > > http://ideasonboard.org/media/eyecam.ps have two camera sensors, and
> > > > > thus two starting points from a data flow point of view. And if you
> > > > > want a better understanding of how complex media graphs can become,
> > > > > have a look at http://ideasonboard.org/media/vsp1.0.pdf (that's a real
> > > > > world example, albeit all connections are internal to the SoC in that
> > > > > particular case, and don't need to be described in DT).
> > > > > 
> > > > > - There's also no such thing as a master device that can just point to
> > > > > slave devices. Once again simple cases exist where that model could
> > > > > work, but real world examples exist of complex pipelines with dozens
> > > > > of elements all implemented by a separate IP core and handled by
> > > > > separate drivers, forming a graph with long chains and branches. We
> > > > > thus need real graph bindings.
> > > > > 
> > > > > - Finally, having no backlinks in DT would make the software
> > > > > implementation very complex. We need to be able to walk the graph in a
> > > > > generic way without having any of the IP core drivers loaded, and
> > > > > without any specific starting point. We would thus need to parse the
> > > > > complete DT tree, looking at all nodes and trying to find out whether
> > > > > they're part of the graph we're trying to walk. The complexity of the
> > > > > operation would be at best quadratic to the number of nodes in the
> > > > > whole DT and to the number of nodes in the graph.
> > > > 
> > > > Not really. To begin with, you cannot determine any meaning of a node
> > > > across the tree (aside from it being an endpoint)
> > > 
> > > That's the important part. I can assume the target node of the
> > > remote-endpoint phandle to be an endpoint, and can thus assume that it
> > > implements the of-graph bindings. That's all I need to be able to walk
> > > the graph in a generic way.
> > >
> > > > without also understanding the binding that the node is a part of. That
> > > > means you need to have something matching against the compatible
> > > > string on both ends of the linkage. For instance:
> > > > 
> > > > panel {
> > > >         compatible = "acme,lvds-panel";
> > > >         lvds-port: port {
> > > >         };
> > > > };
> > > > 
> > > > display-controller {
> > > >         compatible = "encom,video";
> > > >         port {
> > > >                 remote-endpoint = <&lvds-port>;
> > > >         };
> > > > };

Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-20 Thread Grant Likely
On Mon, 10 Mar 2014 16:15:37 +0100, Laurent Pinchart wrote:
> Hi Grant,
> 
> On Monday 10 March 2014 14:58:15 Grant Likely wrote:
> > On Mon, 10 Mar 2014 14:52:53 +0100, Laurent Pinchart wrote:
> > > On Monday 10 March 2014 12:18:20 Tomi Valkeinen wrote:
> > > > On 08/03/14 13:41, Grant Likely wrote:
> > > > >> Ok. If we go for single directional link, the question is then: which
> > > > >> way? And is the direction different for display and camera, which are
> > > > >> kind of reflections of each other?
> > > > > 
> > > > > In general I would recommend choosing whichever device you would
> > > > > sensibly think of as a master. In the camera case I would choose the
> > > > > camera controller node instead of the camera itself, and in the
> > > > > display case I would choose the display controller instead of the
> > > > > panel. The binding author needs to choose what she thinks makes the
> > > > > most sense, but drivers can still use it if it turns out to be
> > > > > 'backwards'
> > > > 
> > > > I would perhaps choose the same approach, but at the same time I think
> > > > it's anything but clear. The display controller doesn't control the panel any
> > > > more than a DMA controller controls, say, the display controller.
> > > > 
> > > > In fact, in earlier versions of OMAP DSS DT support I had a simpler port
> > > > description, and in that I had the panel as the master (i.e. link from
> > > > panel to dispc) because the panel driver uses the display controller's
> > > > features to provide the panel device a data stream.
> > > > 
> > > > And even with the current OMAP DSS DT version, which uses the v4l2 style
> > > > ports/endpoints, the driver model is still the same, and only links
> > > > towards upstream are used.
> > > > 
> > > > So one reason I'm happy with the dual-linking is that I can easily
> > > > follow the links from the downstream entities to upstream entities, and
> > > > other people, who have different driver model, can easily do the
> > > > opposite.
> > > > 
> > > > But I agree that single-linking is enough and this can be handled at
> > > > runtime, even if it makes the code more complex. And perhaps requires
> > > > extra data in the dts, to give the start points for the graph.
> > > 
> > > In theory unidirectional links in DT are indeed enough. However, let's not
> > > forget the following.
> > > 
> > > - There's no such thing as single start points for graphs. Sure, in some
> > > simple cases the graph will have a single start point, but that's not a
> > > generic rule. For instance the camera graphs
> > > http://ideasonboard.org/media/omap3isp.ps and
> > > http://ideasonboard.org/media/eyecam.ps have two camera sensors, and thus
> > > two starting points from a data flow point of view. And if you want a
> > > better understanding of how complex media graphs can become, have a look
> > > at http://ideasonboard.org/media/vsp1.0.pdf (that's a real world example,
> > > albeit all connections are internal to the SoC in that particular case,
> > > and don't need to be described in DT).
> > > 
> > > - There's also no such thing as a master device that can just point to
> > > slave devices. Once again simple cases exist where that model could work,
> > > but real world examples exist of complex pipelines with dozens of
> > > elements all implemented by a separate IP core and handled by separate
> > > drivers, forming a graph with long chains and branches. We thus need real
> > > graph bindings.
> > > 
> > > - Finally, having no backlinks in DT would make the software
> > > implementation very complex. We need to be able to walk the graph in a
> > > generic way without having any of the IP core drivers loaded, and without
> > > any specific starting point. We would thus need to parse the complete DT
> > > tree, looking at all nodes and trying to find out whether they're part of
> > > the graph we're trying to walk. The complexity of the operation would be
> > > at best quadratic to the number of nodes in the whole DT and to the number
> > > of nodes in the graph.
> > 
> > Not really. To begin with, you cannot determine any meaning of a node
> > across the tree (aside from it being an endpoint)
> 
> That's the important part. I can assume the target node of the
> remote-endpoint phandle to be an endpoint, and can thus assume that it
> implements the of-graph bindings. That's all I need to be able to walk the
> graph in a generic way.

Yes, you can assume the target node is an endpoint, but you cannot
assume anything beyond that. It is not a good idea to go looking for
more endpoint links in the local hierarchy that terminates the link.

> 
> > without also understanding the binding that the node is a part of. That
> > means you need to have something matching against the compatible string on
> > both ends of the linkage. For instance:
> > 
> > panel {
> >         compatible = "acme,lvds-panel";
> >         lvds-port: port {
> >         };
> > };
> > 
> > display-controller {
> >         compatible = "encom,video";
> >         port {
> >                 remote-endpoint = <&lvds-port>;
> >         };
> > };
> > 
> > In the above example, the encom,video driver has absolutely zero
> > information about what the acme,lvds-panel binding actually implements.
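
Grant's caveat (you may assume the far side of a remote-endpoint phandle is an endpoint, but nothing beyond that) matches how the graph helpers in this patch are used: a driver resolves the remote endpoint, climbs to its parent port and then to the port's parent device node, and everything past that is the other binding's business. A toy Python model of that climb follows; the tree layout, paths, and helper name are invented for illustration and only mimic the spirit of the of_graph helpers, not their real signatures.

```python
# Toy model of resolving a remote endpoint to the device node that owns
# it. TREE maps each node path to its parent path; REMOTE holds the
# unidirectional remote-endpoint links.

TREE = {
    "/panel/port/endpoint": "/panel/port",
    "/panel/port": "/panel",
    "/panel": "/",
    "/display-controller/port/endpoint": "/display-controller/port",
    "/display-controller/port": "/display-controller",
    "/display-controller": "/",
}

REMOTE = {
    "/display-controller/port/endpoint": "/panel/port/endpoint",
}

def get_remote_port_parent(endpoint):
    """Follow remote-endpoint, then climb endpoint -> port -> device.

    The caller learns only *which* node sits on the far side of the
    link; what that node's binding means is left to whatever driver
    matches its compatible string.
    """
    remote = REMOTE.get(endpoint)
    if remote is None:
        return None
    port = TREE[remote]   # parent of the remote endpoint node
    return TREE[port]     # parent of that port: the device node

assert get_remote_port_parent("/display-controller/port/endpoint") == "/panel"
```

Nothing in the walk inspects the panel's properties beyond the endpoint itself, which is exactly the boundary Grant draws: the generic code stops at the device node, and binding-specific knowledge stays in the drivers on each end.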

  any specific starting point. We would thus need to parse the complete DT
  tree, looking at all nodes and trying to find out whether they're part of
  the graph we're trying to walk. The complexity of the operation would be
  at best quadratic to the number of nodes in the whole DT and to the number
  of nodes in the graph.
 
  Not really. To begin with, you cannot determine any meaning of a node
  across the tree (aside from it being an endpoint)
  
  That's the important part. I can assume the target node of the remote-endpoint
  phandle to be an endpoint, and can thus assume that it implements the of-graph
  bindings. That's all I need to be able to walk the graph in a generic way.
  
  without also understanding the binding that the node is a part of. That
  means you need to have something matching against the compatible string on
  both ends of the linkage. For instance:
 
  panel {
          compatible = "acme,lvds-panel";
          lvds-port: port {
          };
  };

  display-controller {
          compatible = "encom,video";
          port {
                  remote-endpoint = <&lvds-port>;
          };
  };
 
  In the above example, the encom,video driver has absolutely zero
  information about what the acme,lvds-panel binding actually implements.
  There needs to be both a driver for the acme,lvds-panel binding and
  one for the encom,video binding (even if the acme,lvds-panel binding
  is very thin and defers the functionality to the video controller).
  
  I absolutely agree with that. We need a driver for each device (in this case
  the acme panel and the
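For reference, the quoted panel/display-controller example in its double-linked, v4l2-style form would look roughly like this (syntax reconstructed; the endpoint labels are illustrative, not taken from an actual binding document):

```dts
panel {
        compatible = "acme,lvds-panel";

        port {
                panel_in: endpoint {
                        remote-endpoint = <&dc_out>;
                };
        };
};

display-controller {
        compatible = "encom,video";

        port {
                dc_out: endpoint {
                        remote-endpoint = <&panel_in>;
                };
        };
};
```

With both backlinks present, either driver can find its peer without any knowledge of the other binding, which is exactly the property the generic walk relies on.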

Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-20 Thread Grant Likely
On Tue, 11 Mar 2014 16:07:00 +0100, Philipp Zabel p.za...@pengutronix.de 
wrote:
 Hi Grant,
 
 Am Montag, den 10.03.2014, 14:58 + schrieb Grant Likely:
  On Mon, 10 Mar 2014 14:52:53 +0100, Laurent Pinchart 
  laurent.pinch...@ideasonboard.com wrote:
   On Monday 10 March 2014 12:18:20 Tomi Valkeinen wrote:
On 08/03/14 13:41, Grant Likely wrote:
 Ok. If we go for single directional link, the question is then: which
 way? And is the direction different for display and camera, which are
 kind of reflections of each other?
 
 In general I would recommend choosing whichever device you would
 sensibly think of as a master. In the camera case I would choose the
 camera controller node instead of the camera itself, and in the 
 display
 case I would choose the display controller instead of the panel. The
  binding author needs to choose what she thinks makes the most sense, but
  drivers can still use it if it turns out to be 'backwards'.

I would perhaps choose the same approach, but at the same time I think
it's all but clear. The display controller doesn't control the panel any
more than a DMA controller controls, say, the display controller.

In fact, in earlier versions of OMAP DSS DT support I had a simpler port
description, and in that I had the panel as the master (i.e. link from
panel to dispc) because the panel driver uses the display controller's
features to provide the panel device a data stream.

And even with the current OMAP DSS DT version, which uses the v4l2 style
ports/endpoints, the driver model is still the same, and only links
towards upstream are used.

So one reason I'm happy with the dual-linking is that I can easily
follow the links from the downstream entities to upstream entities, and
other people, who have different driver model, can easily do the 
opposite.

But I agree that single-linking is enough and this can be handled at
runtime, even if it makes the code more complex. And perhaps requires
extra data in the dts, to give the start points for the graph.
   
   In theory unidirectional links in DT are indeed enough. However, let's 
   not 
   forget the following.
   
   - There's no such thing as single start points for graphs. Sure, in some 
   simple cases the graph will have a single start point, but that's not a 
   generic rule. For instance the camera graphs 
   http://ideasonboard.org/media/omap3isp.ps and 
   http://ideasonboard.org/media/eyecam.ps have two camera sensors, and thus 
   two 
   starting points from a data flow point of view. And if you want a better 
   understanding of how complex media graphs can become, have a look at 
   http://ideasonboard.org/media/vsp1.0.pdf (that's a real world example, 
   albeit 
   all connections are internal to the SoC in that particular case, and 
   don't 
   need to be described in DT).
   
   - There's also no such thing as a master device that can just point to 
   slave 
   devices. Once again simple cases exist where that model could work, but 
   real 
   world examples exist of complex pipelines with dozens of elements all 
   implemented by a separate IP core and handled by separate drivers, 
   forming a 
   graph with long chains and branches. We thus need real graph bindings.
   
   - Finally, having no backlinks in DT would make the software 
   implementation 
   very complex. We need to be able to walk the graph in a generic way 
   without 
   having any of the IP core drivers loaded, and without any specific 
   starting 
   point. We would thus need to parse the complete DT tree, looking at all 
   nodes 
   and trying to find out whether they're part of the graph we're trying to 
   walk. 
   The complexity of the operation would be at best quadratic to the number 
   of 
   nodes in the whole DT and to the number of nodes in the graph.
  
  Not really. To begin with, you cannot determine any meaning of a node
  across the tree (aside from it being an endpoint) without also
  understanding the binding that the node is a part of. That means you
  need to have something matching against the compatible string on both
  ends of the linkage. For instance:
  
  panel {
          compatible = "acme,lvds-panel";
          lvds-port: port {
          };
  };

  display-controller {
          compatible = "encom,video";
          port {
                  remote-endpoint = <&lvds-port>;
          };
  };
  
  In the above example, the encom,video driver has absolutely zero
  information about what the acme,lvds-panel binding actually implements.
  There needs to be both a driver for the acme,lvds-panel binding and
  one for the encom,video binding (even if the acme,lvds-panel binding
  is very thin and defers the functionality to the video controller).
  
  What you want here is the drivers to register each side of the
  connection. That could be modeled with something like the following
  (pseudocode):
  
  

Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-20 Thread Grant Likely
On Tue, 11 Mar 2014 16:21:49 +0100, Laurent Pinchart 
laurent.pinch...@ideasonboard.com wrote:
 Hi Philipp,
 
 On Tuesday 11 March 2014 16:07:00 Philipp Zabel wrote:
  Am Montag, den 10.03.2014, 14:58 + schrieb Grant Likely:
   On Mon, 10 Mar 2014 14:52:53 +0100, Laurent Pinchart wrote:
 
 [snip]
 
In theory unidirectional links in DT are indeed enough. However, let's
not forget the following.

- There's no such thing as single start points for graphs. Sure, in some
simple cases the graph will have a single start point, but that's not a
generic rule. For instance the camera graphs
http://ideasonboard.org/media/omap3isp.ps and
http://ideasonboard.org/media/eyecam.ps have two camera sensors, and
thus two starting points from a data flow point of view. And if you
want a better understanding of how complex media graphs can become,
have a look at http://ideasonboard.org/media/vsp1.0.pdf (that's a real
world example, albeit all connections are internal to the SoC in that
particular case, and don't need to be described in DT).

- There's also no such thing as a master device that can just point to
slave devices. Once again simple cases exist where that model could
work, but real world examples exist of complex pipelines with dozens of
elements all implemented by a separate IP core and handled by separate
drivers, forming a graph with long chains and branches. We thus need
real graph bindings.

- Finally, having no backlinks in DT would make the software
implementation very complex. We need to be able to walk the graph in a
generic way without having any of the IP core drivers loaded, and
without any specific starting point. We would thus need to parse the
complete DT tree, looking at all nodes and trying to find out whether
they're part of the graph we're trying to walk. The complexity of the
operation would be at best quadratic to the number of nodes in the whole
DT and to the number of nodes in the graph.
   
   Not really. To begin with, you cannot determine any meaning of a node
   across the tree (aside from it being an endpoint) without also
   understanding the binding that the node is a part of. That means you
   need to have something matching against the compatible string on both
   ends of the linkage. For instance:
   
   panel {
           compatible = "acme,lvds-panel";
           lvds-port: port {
           };
   };

   display-controller {
           compatible = "encom,video";
           port {
                   remote-endpoint = <&lvds-port>;
           };
   };
   
   In the above example, the encom,video driver has absolutely zero
   information about what the acme,lvds-panel binding actually implements.
   There needs to be both a driver for the acme,lvds-panel binding and
   one for the encom,video binding (even if the acme,lvds-panel binding
   is very thin and defers the functionality to the video controller).
   
   What you want here is the drivers to register each side of the
   connection. That could be modeled with something like the following
   (pseudocode):
   
   struct of_endpoint {
           struct list_head list;
           struct device_node *ep_node;
           void *context;
           void (*cb)(struct of_endpoint *ep, void *data);
   };

   int of_register_port(struct device_node *node,
                        void (*cb)(struct of_endpoint *ep, void *data),
                        void *data)
   {
           struct of_endpoint *ep = kzalloc(sizeof(*ep), GFP_KERNEL);

           ep->ep_node = node;
           ep->context = data;
           ep->cb = cb;

           /* store the endpoint to a list */
           /* check if the endpoint has a remote-endpoint link */
           /* If so, then link the two together and call the
            * callbacks */
   }
   
   That's neither expensive nor complicated.
   
   Originally I suggested walking the whole tree multiple times, but as
   mentioned that doesn't scale, and as I thought about the above it isn't
   even a valid thing to do. Everything has to be driven by drivers, so
   even if the backlinks are there, nothing can be done with the link until
   the other side goes through enumeration independently.
  
  I have implemented your suggestion as follows. Basically, this allows
  either endpoint to contain the remote-endpoint link, as long as all
  drivers register their endpoints in the probe function and return
  -EPROBE_DEFER from their component framework bind callback until all
  their endpoints are connected.
 
 Besides bringing the whole graph down when a single component can't be probed
 (either because the corresponding hardware device is missing, broken, or the
 driver isn't loaded),

I don't think that follows. It creates a list of all port connections as
drivers 'announce' that the device is available. Any master driver can
query the list and determine what links are important. If something is
missing, then with Phillip's approach the master driver can choose to
ignore it.

 that's adding even one more level of 
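The announce-and-match registration Grant sketches could be modeled standalone roughly as follows. Plain C with illustrative names and a string key standing in for the phandle; this is a sketch of the idea, not an actual kernel API:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* A standalone model of the "announce" list -- layout is an assumption. */
struct endpoint {
	struct endpoint *next;
	const char *name;        /* stands in for the endpoint's device node */
	const char *remote_name; /* stands in for a remote-endpoint phandle */
	struct endpoint *remote; /* resolved once both sides have registered */
};

static struct endpoint *registered;

/*
 * A driver announces one of its endpoints as it probes.  If the other
 * side is already on the list -- whichever of the two sides carries the
 * remote-endpoint link -- the pair is connected.  A master driver can
 * later walk 'registered' and simply ignore endpoints whose 'remote'
 * is still NULL.
 */
static void register_endpoint(struct endpoint *ep)
{
	struct endpoint *other;

	for (other = registered; other; other = other->next) {
		int fwd = other->remote_name &&
			  strcmp(other->remote_name, ep->name) == 0;
		int back = ep->remote_name &&
			   strcmp(ep->remote_name, other->name) == 0;
		if (fwd || back) {
			other->remote = ep;
			ep->remote = other;
		}
	}
	ep->next = registered;
	registered = ep;
}
```

Note that matching works regardless of which side recorded the link, which is why a single-directional link in DT would be sufficient for this scheme.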

Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-20 Thread Grant Likely
On Wed, 12 Mar 2014 10:25:56 +, Russell King - ARM Linux 
li...@arm.linux.org.uk wrote:
 On Mon, Mar 10, 2014 at 02:52:53PM +0100, Laurent Pinchart wrote:
  In theory unidirectional links in DT are indeed enough. However, let's not 
  forget the following.
  
  - There's no such thing as single start points for graphs. Sure, in some 
  simple cases the graph will have a single start point, but that's not a 
  generic rule. For instance the camera graphs 
  http://ideasonboard.org/media/omap3isp.ps and 
  http://ideasonboard.org/media/eyecam.ps have two camera sensors, and thus 
  two 
  starting points from a data flow point of view.
 
 I think we need to stop thinking of a graph linked in terms of data
 flow - that's really not useful.
 
 Consider a display subsystem.  The CRTC is the primary interface for
 the CPU - this is the most interesting interface, it's the interface
 which provides access to the picture to be displayed for the CPU.  Other
 interfaces are secondary to that purpose - reading the I2C DDC bus for
 the display information is all secondary to the primary purpose of
 displaying a picture.
 
 For a capture subsystem, the primary interface for the CPU is the frame
 grabber (whether it be an already encoded frame or not.)  The sensor
 devices are all secondary to that.
 
 So, the primary software interface in each case is where the data for
 the primary purpose is transferred.  This is the point at which these
 graphs should commence since this is where we would normally start
 enumeration of the secondary interfaces.
 
 V4L2 even provides interfaces for this: you open the capture device,
 which then allows you to enumerate the capture device's inputs, and
 this in turn allows you to enumerate their properties.  You don't open
 a particular sensor and work back up the tree.
 
 I believe trying to do this according to the flow of data is just wrong.
 You should always describe things from the primary device for the CPU
 towards the peripheral devices and never the opposite direction.

Agreed.

g.


Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-20 Thread Laurent Pinchart
On Thursday 20 March 2014 17:54:31 Grant Likely wrote:
 On Wed, 12 Mar 2014 10:25:56 +, Russell King - ARM Linux wrote:
  On Mon, Mar 10, 2014 at 02:52:53PM +0100, Laurent Pinchart wrote:
   In theory unidirectional links in DT are indeed enough. However, let's
   not forget the following.
   
   - There's no such thing as single start points for graphs. Sure, in some
   simple cases the graph will have a single start point, but that's not a
   generic rule. For instance the camera graphs
   http://ideasonboard.org/media/omap3isp.ps and
   http://ideasonboard.org/media/eyecam.ps have two camera sensors, and
   thus two starting points from a data flow point of view.
  
  I think we need to stop thinking of a graph linked in terms of data
  flow - that's really not useful.
  
  Consider a display subsystem.  The CRTC is the primary interface for
  the CPU - this is the most interesting interface, it's the interface
  which provides access to the picture to be displayed for the CPU.  Other
  interfaces are secondary to that purpose - reading the I2C DDC bus for
  the display information is all secondary to the primary purpose of
  displaying a picture.
  
  For a capture subsystem, the primary interface for the CPU is the frame
  grabber (whether it be an already encoded frame or not.)  The sensor
  devices are all secondary to that.
  
  So, the primary software interface in each case is where the data for
  the primary purpose is transferred.  This is the point at which these
  graphs should commence since this is where we would normally start
  enumeration of the secondary interfaces.
  
  V4L2 even provides interfaces for this: you open the capture device,
  which then allows you to enumerate the capture device's inputs, and
  this in turn allows you to enumerate their properties.  You don't open
  a particular sensor and work back up the tree.
  
  I believe trying to do this according to the flow of data is just wrong.
  You should always describe things from the primary device for the CPU
  towards the peripheral devices and never the opposite direction.
 
 Agreed.

Absolutely not agreed. The whole concept of CPU towards peripherals only makes 
sense for very simple devices and breaks as soon as the hardware gets more 
complex. There's no such thing as CPU towards peripherals when peripherals 
communicate directly.

Please consider use cases more complex than just a display controller and an 
encoder, and you'll realize how messy not being able to parse the whole graph 
at once will become. Let's try to improve things, not prevent support for
future devices.

-- 
Regards,

Laurent Pinchart



Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-20 Thread Russell King - ARM Linux
On Thu, Mar 20, 2014 at 07:16:29PM +0100, Laurent Pinchart wrote:
 On Thursday 20 March 2014 17:54:31 Grant Likely wrote:
  On Wed, 12 Mar 2014 10:25:56 +, Russell King - ARM Linux wrote:
   On Mon, Mar 10, 2014 at 02:52:53PM +0100, Laurent Pinchart wrote:
In theory unidirectional links in DT are indeed enough. However, let's
not forget the following.

- There's no such thing as single start points for graphs. Sure, in some
simple cases the graph will have a single start point, but that's not a
generic rule. For instance the camera graphs
http://ideasonboard.org/media/omap3isp.ps and
http://ideasonboard.org/media/eyecam.ps have two camera sensors, and
thus two starting points from a data flow point of view.
   
   I think we need to stop thinking of a graph linked in terms of data
   flow - that's really not useful.
   
   Consider a display subsystem.  The CRTC is the primary interface for
   the CPU - this is the most interesting interface, it's the interface
   which provides access to the picture to be displayed for the CPU.  Other
   interfaces are secondary to that purpose - reading the I2C DDC bus for
   the display information is all secondary to the primary purpose of
   displaying a picture.
   
   For a capture subsystem, the primary interface for the CPU is the frame
   grabber (whether it be an already encoded frame or not.)  The sensor
   devices are all secondary to that.
   
   So, the primary software interface in each case is where the data for
   the primary purpose is transferred.  This is the point at which these
   graphs should commence since this is where we would normally start
   enumeration of the secondary interfaces.
   
   V4L2 even provides interfaces for this: you open the capture device,
   which then allows you to enumerate the capture device's inputs, and
   this in turn allows you to enumerate their properties.  You don't open
   a particular sensor and work back up the tree.
   
   I believe trying to do this according to the flow of data is just wrong.
   You should always describe things from the primary device for the CPU
   towards the peripheral devices and never the opposite direction.
  
  Agreed.
 
 Absolutely not agreed. The whole concept of CPU towards peripherals only 
 makes 
 sense for very simple devices and breaks as soon as the hardware gets more 
 complex. There's no such thing as CPU towards peripherals when peripherals 
 communicate directly.
 
 Please consider use cases more complex than just a display controller and an 
 encoder, and you'll realize how messy not being able to parse the whole graph 
 at once will become. Let's try to improve things, not prevent support for
 future devices.

That's odd, I did.

Please draw some (ascii) diagrams of the situations you're saying this
won't work for, because at the moment all I'm seeing is some vague
hand-waving rather than anything factual that I can relate to.  Help
us to actually _see_ the problem you have with this approach so we can
understand it.

-- 
FTTC broadband for 0.8mile line: now at 9.7Mbps down 460kbps up... slowly
improving, and getting towards what was expected from it.


Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-20 Thread Mauro Carvalho Chehab
Em Thu, 20 Mar 2014 17:54:31 +
Grant Likely grant.lik...@linaro.org escreveu:

 On Wed, 12 Mar 2014 10:25:56 +, Russell King - ARM Linux 
 li...@arm.linux.org.uk wrote:
  On Mon, Mar 10, 2014 at 02:52:53PM +0100, Laurent Pinchart wrote:
   In theory unidirectional links in DT are indeed enough. However, let's 
   not 
   forget the following.
   
   - There's no such thing as single start points for graphs. Sure, in some 
   simple cases the graph will have a single start point, but that's not a 
   generic rule. For instance the camera graphs 
   http://ideasonboard.org/media/omap3isp.ps and 
   http://ideasonboard.org/media/eyecam.ps have two camera sensors, and thus 
   two 
   starting points from a data flow point of view.
  
  I think we need to stop thinking of a graph linked in terms of data
  flow - that's really not useful.
  
  Consider a display subsystem.  The CRTC is the primary interface for
  the CPU - this is the most interesting interface, it's the interface
  which provides access to the picture to be displayed for the CPU.  Other
  interfaces are secondary to that purpose - reading the I2C DDC bus for
  the display information is all secondary to the primary purpose of
  displaying a picture.
  
  For a capture subsystem, the primary interface for the CPU is the frame
  grabber (whether it be an already encoded frame or not.)  The sensor
  devices are all secondary to that.
  
  So, the primary software interface in each case is where the data for
  the primary purpose is transferred.  This is the point at which these
  graphs should commence since this is where we would normally start
  enumeration of the secondary interfaces.
  
  V4L2 even provides interfaces for this: you open the capture device,
  which then allows you to enumerate the capture device's inputs, and
  this in turn allows you to enumerate their properties.  You don't open
  a particular sensor and work back up the tree.
  
  I believe trying to do this according to the flow of data is just wrong.
  You should always describe things from the primary device for the CPU
  towards the peripheral devices and never the opposite direction.
 
 Agreed.

I don't agree, as what counts as the primary device is relative.

Actually, in the case of a media data flow, the CPU is generally not
the primary device.

Even on general purpose computers, if the full data flow is taken into
account, the CPU is a mere device that will just be used to copy
data either to the GPU and speakers or to disk, possibly doing format
conversions, when the hardware is cheap and doesn't provide format
converters.

On more complex devices, like the ones we want to solve with the
media controller, like an embedded hardware like a TV or a STB, the CPU
is just an ancillary component that could even hang without stopping 
TV reception, as the data flow can be fully done inside the chipset.

Regards,
Mauro


Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-20 Thread Russell King - ARM Linux
On Thu, Mar 20, 2014 at 03:38:04PM -0300, Mauro Carvalho Chehab wrote:
 Em Thu, 20 Mar 2014 17:54:31 +
 Grant Likely grant.lik...@linaro.org escreveu:
 
  On Wed, 12 Mar 2014 10:25:56 +, Russell King - ARM Linux 
  li...@arm.linux.org.uk wrote:
   On Mon, Mar 10, 2014 at 02:52:53PM +0100, Laurent Pinchart wrote:
In theory unidirectional links in DT are indeed enough. However, let's 
not 
forget the following.

- There's no such thing as single start points for graphs. Sure, in 
some 
simple cases the graph will have a single start point, but that's not a 
generic rule. For instance the camera graphs 
http://ideasonboard.org/media/omap3isp.ps and 
http://ideasonboard.org/media/eyecam.ps have two camera sensors, and 
thus two 
starting points from a data flow point of view.
   
   I think we need to stop thinking of a graph linked in terms of data
   flow - that's really not useful.
   
   Consider a display subsystem.  The CRTC is the primary interface for
   the CPU - this is the most interesting interface, it's the interface
   which provides access to the picture to be displayed for the CPU.  Other
   interfaces are secondary to that purpose - reading the I2C DDC bus for
   the display information is all secondary to the primary purpose of
   displaying a picture.
   
   For a capture subsystem, the primary interface for the CPU is the frame
   grabber (whether it be an already encoded frame or not.)  The sensor
   devices are all secondary to that.
   
   So, the primary software interface in each case is where the data for
   the primary purpose is transferred.  This is the point at which these
   graphs should commence since this is where we would normally start
   enumeration of the secondary interfaces.
   
   V4L2 even provides interfaces for this: you open the capture device,
   which then allows you to enumerate the capture device's inputs, and
   this in turn allows you to enumerate their properties.  You don't open
   a particular sensor and work back up the tree.
   
   I believe trying to do this according to the flow of data is just wrong.
   You should always describe things from the primary device for the CPU
   towards the peripheral devices and never the opposite direction.
  
  Agreed.
 
  I don't agree, as what counts as the primary device is relative.
 
 Actually, in the case of a media data flow, the CPU is generally not
 the primary device.
 
 Even on general purpose computers, if the full data flow is taken into
 account, the CPU is a mere device that will just be used to copy
 data either to the GPU and speakers or to disk, possibly doing format
 conversions, when the hardware is cheap and doesn't provide format
 converters.
 
 On more complex devices, like the ones we want to solve with the
 media controller, like an embedded hardware like a TV or a STB, the CPU
 is just an ancillary component that could even hang without stopping 
 TV reception, as the data flow can be fully done inside the chipset.

The CPU is the _controlling_ component - it's the component that has to
configure the peripherals so they all talk to each other in the right
way.  Therefore, the view of it needs to be CPU centric.

If we were providing a DT description for consumption by some other
device in the system, then the view should be as seen from that device
instead.

Think about this.  Would you describe a system starting at, say, the
system keyboard, and branching all the way through just because that's
how you interact with it, or would you describe it from the CPU's point
of view because that's what has to be in control of the system.

-- 
FTTC broadband for 0.8mile line: now at 9.7Mbps down 460kbps up... slowly
improving, and getting towards what was expected from it.


Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-20 Thread Laurent Pinchart
On Thursday 20 March 2014 18:43:16 Russell King - ARM Linux wrote:
 On Thu, Mar 20, 2014 at 03:38:04PM -0300, Mauro Carvalho Chehab wrote:
  Em Thu, 20 Mar 2014 17:54:31 + Grant Likely escreveu:
   On Wed, 12 Mar 2014 10:25:56 +, Russell King - ARM Linux wrote:
On Mon, Mar 10, 2014 at 02:52:53PM +0100, Laurent Pinchart wrote:
 In theory unidirectional links in DT are indeed enough. However,
 let's not forget the following.
 
 - There's no such thing as single start points for graphs. Sure, in
 some simple cases the graph will have a single start point, but
 that's not a generic rule. For instance the camera graphs
 http://ideasonboard.org/media/omap3isp.ps and
 http://ideasonboard.org/media/eyecam.ps have two camera sensors, and
 thus two starting points from a data flow point of view.

I think we need to stop thinking of a graph linked in terms of data
flow - that's really not useful.

Consider a display subsystem.  The CRTC is the primary interface for
the CPU - this is the most interesting interface, it's the interface
which provides access to the picture to be displayed for the CPU. 
Other interfaces are secondary to that purpose - reading the I2C DDC
bus for the display information is all secondary to the primary
purpose of displaying a picture.

For a capture subsystem, the primary interface for the CPU is the
frame grabber (whether it be an already encoded frame or not.)  The
sensor devices are all secondary to that.

So, the primary software interface in each case is where the data for
the primary purpose is transferred.  This is the point at which these
graphs should commence since this is where we would normally start
enumeration of the secondary interfaces.

V4L2 even provides interfaces for this: you open the capture device,
which then allows you to enumerate the capture device's inputs, and
this in turn allows you to enumerate their properties.  You don't open
a particular sensor and work back up the tree.

I believe trying to do this according to the flow of data is just
wrong.
You should always describe things from the primary device for the CPU
towards the peripheral devices and never the opposite direction.
   
   Agreed.
  
  I don't agree, as what's the primary device is relative.
  
  Actually, in the case of a media data flow, the CPU is generally not
  the primary device.
  
  Even on general purpose computers, if the full data flow is taken into
  the account, the CPU is a mere device that will just be used to copy
  data either to GPU and speakers or to disk, eventually doing format
  conversions, when the hardware is cheap and don't provide format
  converters.
  
  On more complex devices, like the ones we want to solve with the
  media controller, like an embedded hardware like a TV or a STB, the CPU
  is just an ancillary component that could even hang without stopping
  TV reception, as the data flow can be fully done inside the chipset.
 
 The CPU is the _controlling_ component - it's the component that has to
 configure the peripherals so they all talk to each other in the right
 way.  Therefore, the view of it needs to be CPU centric.
 
 If we were providing a DT description for consumption by some other
 device in the system, then the view should be as seen from that device
 instead.
 
 Think about this.  Would you describe a system starting at, say, the
 system keyboard, and branching all the way through just because that's
 how you interact with it, or would you describe it from the CPU's point
 of view because that's what has to be in control of the system.

DT has been designed to represent a control-based view of the system. It does 
so pretty well using its tree-based model. However, it doesn't have a native 
way to represent a flow-based graph, hence the OF graph solution we're 
discussing. The whole point of this proposal is to represent the topology of 
the media device, not how each entity is controlled.
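The OF graph bindings under discussion encode that topology with port and
endpoint nodes. A minimal, hypothetical sketch (node and label names invented
for illustration) of the double-linked form used by the current v4l2 bindings:

```dts
/*
 * Hypothetical nodes: each side of the connection carries a
 * 'remote-endpoint' phandle pointing at the other.  This describes
 * the topology only - which driver controls what is not encoded.
 */
&display_controller {
	port {
		dc_out: endpoint {
			remote-endpoint = <&hdmi_in>;
		};
	};
};

&hdmi_encoder {
	port {
		hdmi_in: endpoint {
			remote-endpoint = <&dc_out>;
		};
	};
};
```

Because the link is stated on both sides, a parser can verify that each
'remote-endpoint' is mirrored by its counterpart and reject a malformed graph.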

-- 
Regards,

Laurent Pinchart

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-20 Thread Grant Likely
On Thu, 20 Mar 2014 15:38:04 -0300, Mauro Carvalho Chehab 
m.che...@samsung.com wrote:
 Em Thu, 20 Mar 2014 17:54:31 +
 Grant Likely grant.lik...@linaro.org escreveu:
 
  On Wed, 12 Mar 2014 10:25:56 +, Russell King - ARM Linux 
  li...@arm.linux.org.uk wrote:
   On Mon, Mar 10, 2014 at 02:52:53PM +0100, Laurent Pinchart wrote:
In theory unidirectional links in DT are indeed enough. However, let's 
not 
forget the following.

- There's no such thing as single start points for graphs. Sure, in 
some 
simple cases the graph will have a single start point, but that's not a 
generic rule. For instance the camera graphs 
http://ideasonboard.org/media/omap3isp.ps and 
http://ideasonboard.org/media/eyecam.ps have two camera sensors, and 
thus two 
starting points from a data flow point of view.
   
   I think we need to stop thinking of a graph linked in terms of data
   flow - that's really not useful.
   
   Consider a display subsystem.  The CRTC is the primary interface for
   the CPU - this is the most interesting interface, it's the interface
   which provides access to the picture to be displayed for the CPU.  Other
   interfaces are secondary to that purpose - reading the I2C DDC bus for
   the display information is all secondary to the primary purpose of
   displaying a picture.
   
   For a capture subsystem, the primary interface for the CPU is the frame
   grabber (whether it be an already encoded frame or not.)  The sensor
   devices are all secondary to that.
   
   So, the primary software interface in each case is where the data for
   the primary purpose is transferred.  This is the point at which these
   graphs should commence since this is where we would normally start
   enumeration of the secondary interfaces.
   
   V4L2 even provides interfaces for this: you open the capture device,
   which then allows you to enumerate the capture device's inputs, and
   this in turn allows you to enumerate their properties.  You don't open
   a particular sensor and work back up the tree.
   
   I believe trying to do this according to the flow of data is just wrong.
   You should always describe things from the primary device for the CPU
   towards the peripheral devices and never the opposite direction.
  
  Agreed.
 
 I don't agree, as what's the primary device is relative. 
 
 Actually, in the case of a media data flow, the CPU is generally not
 the primary device.
 
 Even on general purpose computers, if the full data flow is taken into
 the account, the CPU is a mere device that will just be used to copy
 data either to GPU and speakers or to disk, eventually doing format
 conversions, when the hardware is cheap and don't provide format
 converters.
 
 On more complex devices, like the ones we want to solve with the
 media controller, like an embedded hardware like a TV or a STB, the CPU
 is just an ancillary component that could even hang without stopping 
 TV reception, as the data flow can be fully done inside the chipset.

We're talking about wiring up device drivers here, not data flow. Yes, I
completely understand that data flow is often not even remotely
cpu-centric. However, device drivers are, and the kernel needs to know
the dependency graph for choosing what devices depend on other devices.

g.



Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-20 Thread Laurent Pinchart
Hi Russell,

On Thursday 20 March 2014 18:18:20 Russell King - ARM Linux wrote:
 On Thu, Mar 20, 2014 at 07:16:29PM +0100, Laurent Pinchart wrote:
  On Thursday 20 March 2014 17:54:31 Grant Likely wrote:
   On Wed, 12 Mar 2014 10:25:56 +, Russell King - ARM Linux wrote:

[snip]

I believe trying to do this according to the flow of data is just
wrong. You should always describe things from the primary device for
the CPU towards the peripheral devices and never the opposite
direction.
   
   Agreed.
  
  Absolutely not agreed. The whole concept of CPU towards peripherals only
  makes sense for very simple devices and breaks as soon as the hardware
  gets more complex. There's no such thing as CPU towards peripherals when
  peripherals communicate directly.
  
  Please consider use cases more complex than just a display controller and
  an encoder, and you'll realize how messy not being able to parse the
  whole graph at once will become. Let's try to improve things, not to make
  sure to prevent support for future devices.
 
 That's odd, I did.
 
 Please draw some (ascii) diagrams of the situations you're saying this
 won't work for, because at the moment all I'm seeing is some vague
 hand-waving rather than anything factual that I can relate to.  Help
 us to actually _see_ the problem you have with this approach so we can
 understand it.

Working on it. Given my drawing skills this will take a bit of time.

-- 
Regards,

Laurent Pinchart



Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-20 Thread Laurent Pinchart
On Thursday 20 March 2014 18:48:16 Grant Likely wrote:
 On Thu, 20 Mar 2014 15:38:04 -0300, Mauro Carvalho Chehab wrote:
  Em Thu, 20 Mar 2014 17:54:31 + Grant Likely escreveu:
   On Wed, 12 Mar 2014 10:25:56 +, Russell King - ARM Linux wrote:
On Mon, Mar 10, 2014 at 02:52:53PM +0100, Laurent Pinchart wrote:
 In theory unidirectional links in DT are indeed enough. However,
 let's not forget the following.
 
 - There's no such thing as single start points for graphs. Sure, in
 some simple cases the graph will have a single start point, but
 that's not a generic rule. For instance the camera graphs
 http://ideasonboard.org/media/omap3isp.ps and
 http://ideasonboard.org/media/eyecam.ps have two camera sensors, and
 thus two starting points from a data flow point of view.

I think we need to stop thinking of a graph linked in terms of data
flow - that's really not useful.

Consider a display subsystem.  The CRTC is the primary interface for
the CPU - this is the most interesting interface, it's the interface
which provides access to the picture to be displayed for the CPU. 
Other interfaces are secondary to that purpose - reading the I2C DDC
bus for the display information is all secondary to the primary
purpose of displaying a picture.

For a capture subsystem, the primary interface for the CPU is the
frame grabber (whether it be an already encoded frame or not.)  The
sensor devices are all secondary to that.

So, the primary software interface in each case is where the data for
the primary purpose is transferred.  This is the point at which these
graphs should commence since this is where we would normally start
enumeration of the secondary interfaces.

V4L2 even provides interfaces for this: you open the capture device,
which then allows you to enumerate the capture device's inputs, and
this in turn allows you to enumerate their properties.  You don't open
a particular sensor and work back up the tree.

I believe trying to do this according to the flow of data is just
wrong. You should always describe things from the primary device for
the CPU towards the peripheral devices and never the opposite
direction.
   
   Agreed.
  
  I don't agree, as what's the primary device is relative.
  
  Actually, in the case of a media data flow, the CPU is generally not
  the primary device.
  
  Even on general purpose computers, if the full data flow is taken into
  the account, the CPU is a mere device that will just be used to copy
  data either to GPU and speakers or to disk, eventually doing format
  conversions, when the hardware is cheap and don't provide format
  converters.
  
  On more complex devices, like the ones we want to solve with the
  media controller, like an embedded hardware like a TV or a STB, the CPU
  is just an ancillary component that could even hang without stopping
  TV reception, as the data flow can be fully done inside the chipset.
 
 We're talking about wiring up device drivers here, not data flow. Yes, I
 completely understand that data flow is often not even remotely
 cpu-centric. However, device drivers are, and the kernel needs to know
 the dependency graph for choosing what devices depend on other devices.

Then we might not be talking about the same thing. I'm talking about DT 
bindings to represent the topology of the device, not how drivers are wired 
together. The latter isn't a property of the hardware.

-- 
Regards,

Laurent Pinchart



Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-20 Thread Laurent Pinchart
Hi Russell,

On Thursday 20 March 2014 18:18:20 Russell King - ARM Linux wrote:
 On Thu, Mar 20, 2014 at 07:16:29PM +0100, Laurent Pinchart wrote:
  On Thursday 20 March 2014 17:54:31 Grant Likely wrote:
   On Wed, 12 Mar 2014 10:25:56 +, Russell King - ARM Linux wrote:
On Mon, Mar 10, 2014 at 02:52:53PM +0100, Laurent Pinchart wrote:
 In theory unidirectional links in DT are indeed enough. However,
 let's not forget the following.
 
 - There's no such thing as single start points for graphs. Sure, in
 some simple cases the graph will have a single start point, but
 that's not a generic rule. For instance the camera graphs
 http://ideasonboard.org/media/omap3isp.ps and
 http://ideasonboard.org/media/eyecam.ps have two camera sensors, and
 thus two starting points from a data flow point of view.

I think we need to stop thinking of a graph linked in terms of data
flow - that's really not useful.

Consider a display subsystem.  The CRTC is the primary interface for
the CPU - this is the most interesting interface, it's the interface
which provides access to the picture to be displayed for the CPU. 
Other interfaces are secondary to that purpose - reading the I2C DDC
bus for the display information is all secondary to the primary
purpose of displaying a picture.

For a capture subsystem, the primary interface for the CPU is the
frame grabber (whether it be an already encoded frame or not.)  The
sensor devices are all secondary to that.

So, the primary software interface in each case is where the data for
the primary purpose is transferred.  This is the point at which these
graphs should commence since this is where we would normally start
enumeration of the secondary interfaces.

V4L2 even provides interfaces for this: you open the capture device,
which then allows you to enumerate the capture device's inputs, and
this in turn allows you to enumerate their properties.  You don't open
a particular sensor and work back up the tree.

I believe trying to do this according to the flow of data is just
wrong. You should always describe things from the primary device for
the CPU towards the peripheral devices and never the opposite
direction.
   
   Agreed.
  
  Absolutely not agreed. The whole concept of CPU towards peripherals only
  makes sense for very simple devices and breaks as soon as the hardware
  gets more complex. There's no such thing as CPU towards peripherals when
  peripherals communicate directly.
  
  Please consider use cases more complex than just a display controller and
  an encoder, and you'll realize how messy not being able to parse the
  whole graph at once will become. Let's try to improve things, not to make
  sure to prevent support for future devices.
 
 That's odd, I did.
 
 Please draw some (ascii) diagrams of the situations you're saying this
 won't work for, because at the moment all I'm seeing is some vague
 hand-waving rather than anything factual that I can relate to.  Help
 us to actually _see_ the problem you have with this approach so we can
 understand it.

I've broken the diagram into pieces as it won't fit in 80 columns. Ports named 
by a capital letter enclosed in parentheses are connected together. 

Let's start with the on-board input paths, with an HDMI input connector and a 
camera sensor.

 ,-----------.      ,----------.
 | HDMI      |      | ADV7611  |
 | Connector | ---> | HDMI     | ---> (A)
 |           |      | Decoder  |
 `-----------'      `----------'

 ,-----------.
 | AR0331    |
 | Camera    | ---> (B)
 | Sensor    |
 `-----------'

We then get to the SoC.

      |
      |     ,-----------.      ,------------.
      |     | BT.656    |      | Test       |
(A) -[0]--> | Decoder   | ---> | Pattern    | ---> (C)
      |     |           |      | Generator  |
      |     `-----------'      `------------'
      |
      |     ,-----------.      ,------------.      ,------------.
      |     | CSI-2     |      |            |      | Test       |
(B) -[1]--> | Receiver  | ---> | Processing | ---> | Pattern    | ---> (D)
      |     |           |      |            |      | Generator  |
      |     `-----------'      `-----+------'      `------------'
      |                              |
      |                              |     ,------------.
      |                              |     |            |      ,-----.
      | --SOC--                      `---> | Statistics | ---> | DMA |
      |                                    |            |      `-----'
      |                                    `------------'

The block named Processing is made of:

      ,------------.      ,------------.      ,-------------.
      | Faulty     |      | Bayer      |      | White       |
 ---> | Pixels     | ---> | to RGB     | ---> | Balance     | ---.
      | Correction |      | Conversion |      | Correction  |    |
      `------------'      `------------'      `-------------'    |
 

Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-20 Thread Grant Likely
On Thu, 20 Mar 2014 19:52:53 +0100, Laurent Pinchart 
laurent.pinch...@ideasonboard.com wrote:
 On Thursday 20 March 2014 18:48:16 Grant Likely wrote:
  On Thu, 20 Mar 2014 15:38:04 -0300, Mauro Carvalho Chehab wrote:
   Em Thu, 20 Mar 2014 17:54:31 + Grant Likely escreveu:
On Wed, 12 Mar 2014 10:25:56 +, Russell King - ARM Linux wrote:
 On Mon, Mar 10, 2014 at 02:52:53PM +0100, Laurent Pinchart wrote:
  In theory unidirectional links in DT are indeed enough. However,
  let's not forget the following.
  
  - There's no such thing as single start points for graphs. Sure, in
  some simple cases the graph will have a single start point, but
  that's not a generic rule. For instance the camera graphs
  http://ideasonboard.org/media/omap3isp.ps and
  http://ideasonboard.org/media/eyecam.ps have two camera sensors, and
  thus two starting points from a data flow point of view.
 
 I think we need to stop thinking of a graph linked in terms of data
 flow - that's really not useful.
 
 Consider a display subsystem.  The CRTC is the primary interface for
 the CPU - this is the most interesting interface, it's the interface
 which provides access to the picture to be displayed for the CPU. 
 Other interfaces are secondary to that purpose - reading the I2C DDC
 bus for the display information is all secondary to the primary
 purpose of displaying a picture.
 
 For a capture subsystem, the primary interface for the CPU is the
 frame grabber (whether it be an already encoded frame or not.)  The
 sensor devices are all secondary to that.
 
 So, the primary software interface in each case is where the data for
 the primary purpose is transferred.  This is the point at which these
 graphs should commence since this is where we would normally start
 enumeration of the secondary interfaces.
 
 V4L2 even provides interfaces for this: you open the capture device,
 which then allows you to enumerate the capture device's inputs, and
 this in turn allows you to enumerate their properties.  You don't open
 a particular sensor and work back up the tree.
 
 I believe trying to do this according to the flow of data is just
 wrong. You should always describe things from the primary device for
 the CPU towards the peripheral devices and never the opposite
 direction.

Agreed.
   
   I don't agree, as what's the primary device is relative.
   
   Actually, in the case of a media data flow, the CPU is generally not
   the primary device.
   
   Even on general purpose computers, if the full data flow is taken into
   the account, the CPU is a mere device that will just be used to copy
   data either to GPU and speakers or to disk, eventually doing format
   conversions, when the hardware is cheap and don't provide format
   converters.
   
   On more complex devices, like the ones we want to solve with the
   media controller, like an embedded hardware like a TV or a STB, the CPU
   is just an ancillary component that could even hang without stopping
   TV reception, as the data flow can be fully done inside the chipset.
  
  We're talking about wiring up device drivers here, not data flow. Yes, I
  completely understand that data flow is often not even remotely
  cpu-centric. However, device drivers are, and the kernel needs to know
  the dependency graph for choosing what devices depend on other devices.
 
 Then we might not be talking about the same thing. I'm talking about DT 
 bindings to represent the topology of the device, not how drivers are wired 
 together.

Possibly. I'm certainly confused now. You brought up the component
helpers in drivers/base/component.c, so I thought working out
dependencies is part of the purpose of this binding. Everything I've
heard so far has given me the impression that the graph binding is tied
up with knowing when all of the devices exist.

How device drivers get connected together may not strictly be a property
of hardware, but it absolutely is informed by hardware topology.

g.



Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-20 Thread Laurent Pinchart
Hi Grant,

On Thursday 20 March 2014 23:12:50 Grant Likely wrote:
 On Thu, 20 Mar 2014 19:52:53 +0100, Laurent Pinchart wrote:
  On Thursday 20 March 2014 18:48:16 Grant Likely wrote:
   On Thu, 20 Mar 2014 15:38:04 -0300, Mauro Carvalho Chehab wrote:
Em Thu, 20 Mar 2014 17:54:31 + Grant Likely escreveu:
 On Wed, 12 Mar 2014 10:25:56 +, Russell King - ARM Linux wrote:

[snip]

  I believe trying to do this according to the flow of data is just
  wrong. You should always describe things from the primary device
  for the CPU towards the peripheral devices and never the opposite
  direction.
 
 Agreed.

I don't agree, as what's the primary device is relative.

Actually, in the case of a media data flow, the CPU is generally not
the primary device.

Even on general purpose computers, if the full data flow is taken into
the account, the CPU is a mere device that will just be used to copy
data either to GPU and speakers or to disk, eventually doing format
conversions, when the hardware is cheap and don't provide format
converters.

On more complex devices, like the ones we want to solve with the
media controller, like an embedded hardware like a TV or a STB, the
CPU is just an ancillary component that could even hang without
stopping TV reception, as the data flow can be fully done inside the
chipset.
   
   We're talking about wiring up device drivers here, not data flow. Yes, I
   completely understand that data flow is often not even remotely
   cpu-centric. However, device drivers are, and the kernel needs to know
   the dependency graph for choosing what devices depend on other devices.
  
  Then we might not be talking about the same thing. I'm talking about DT
  bindings to represent the topology of the device, not how drivers are
  wired together.
 
 Possibly. I'm certainly confused now. You brought up the component helpers
 in drivers/base/component.c, so I thought working out dependencies is part
 of the purpose of this binding. Everything I've heard so far has given me
 the impression that the graph binding is tied up with knowing when all of
 the devices exist.

The two are related, you're of course right about that.

We're not really moving forward here. Part of our disagreement comes in my 
opinion from having different requirements and different views of the problem, 
caused by experiences with different kinds of devices. This is much easier to 
solve by sitting around the same table than discussing on mailing lists. I 
would propose a meeting at the ELC but that's still a bit far away and would 
delay progress by more than one month, which is probably not acceptable.

I can reply to the e-mail where I've drawn one use case I have to deal with to 
detail my needs if that can help.

Alternatively the UK isn't that far away and I could jump in a train if you 
can provide tea for the discussion :-)

 How device drivers get connected together may not strictly be a property
 of hardware, but it absolutely is informed by hardware topology.

-- 
Regards,

Laurent Pinchart



Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-12 Thread Laurent Pinchart
Hi Russell and Tomi,

On Wednesday 12 March 2014 12:47:09 Tomi Valkeinen wrote:
> On 12/03/14 12:25, Russell King - ARM Linux wrote:
> > On Mon, Mar 10, 2014 at 02:52:53PM +0100, Laurent Pinchart wrote:
> >> In theory unidirectional links in DT are indeed enough. However, let's
> >> not forget the following.
> >> 
> >> - There's no such thing as single start points for graphs. Sure, in some
> >> simple cases the graph will have a single start point, but that's not a
> >> generic rule. For instance the camera graphs
> >> http://ideasonboard.org/media/omap3isp.ps and
> >> http://ideasonboard.org/media/eyecam.ps have two camera sensors, and thus
> >> two starting points from a data flow point of view.
> > 
> > I think we need to stop thinking of a graph linked in terms of data
> > flow - that's really not useful.
> > 
> > Consider a display subsystem.  The CRTC is the primary interface for
> > the CPU - this is the "most interesting" interface, it's the interface
> > which provides access to the picture to be displayed for the CPU.  Other
> > interfaces are secondary to that purpose - reading the I2C DDC bus for
> > the display information is all secondary to the primary purpose of
> > displaying a picture.
> > 
> > For a capture subsystem, the primary interface for the CPU is the frame
> > grabber (whether it be an already encoded frame or not.)  The sensor
> > devices are all secondary to that.
> > 
> > So, the primary software interface in each case is where the data for
> > the primary purpose is transferred.  This is the point at which these
> > graphs should commence since this is where we would normally start
> > enumeration of the secondary interfaces.
> > 
> > V4L2 even provides interfaces for this: you open the capture device,
> > which then allows you to enumerate the capture device's inputs, and
> > this in turn allows you to enumerate their properties.  You don't open
> > a particular sensor and work back up the tree.

Please note that this has partly changed a couple of years ago with the 
introduction of the media controller framework. Userspace now opens a logical 
media device that describes the topology of the hardware, and then accesses 
individual components directly, from sensor to DMA engine.

> We do it the other way around in OMAP DSS. It's the displays the user is
> interested in, so we enumerate the displays, and if the user wants to
> enable a display, we then follow the links from the display towards the
> SoC, configuring and enabling the components on the way.

The logical view of a device from a CPU perspective evolves over time, as APIs 
are refactored or created to support new hardware that comes with new 
paradigms and additional complexity. The hardware data flow direction, 
however, doesn't change. Modeling only the data flow direction in DT might be 
tempting, but that is probably too hasty a conclusion: if DT is supposed to 
model the hardware, it ends up modeling a logical view of the hardware, and 
the question is thus not as closed as one might believe.

In the particular case of display devices I believe that using the data flow 
direction for links (assuming we can't use bidirectional links in DT) is a 
good model. It would allow parsing the whole graph at a reasonable cost (still 
higher than with bidirectional links) while making clear how to represent 
links. Let's not forget that with more complex devices not all components can 
be referenced directly from the CPU-side display controller.
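A hypothetical sketch (invented labels) of that unidirectional alternative:
each link is described once, on the data source side, following the data flow
direction. The sink carries no back pointer, so walking from sink to source
would require scanning the whole tree.

```dts
&camera_sensor {
	port {
		sensor_out: endpoint {
			remote-endpoint = <&csi2_in>;
		};
	};
};

&csi2_receiver {
	port@0 {
		csi2_in: endpoint {
			/* no remote-endpoint in the single-linked scheme */
		};
	};
};
```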

> I don't have a strong opinion on the direction, I think both have their
> pros. In any case, that's more of a driver model thing, and I'm fine
> with linking in the DT outwards from the SoC (presuming the
> double-linking is not ok, which I still like best).
> 
> > I believe trying to do this according to the flow of data is just wrong.
> > You should always describe things from the primary device for the CPU
> > towards the peripheral devices and never the opposite direction.
> 
> In that case there's possibly the issue I mentioned in other email in
> this thread: an encoder can be used in both a display and a capture
> pipeline. Describing the links outwards from CPU means that sometimes
> the encoder's input port is pointed at, and sometimes the encoder's
> output port is pointed at.
> 
> That's possibly ok, but I think Grant was of the opinion that things
> should be explicitly described in the binding documentation: either a
> device's port must contain a 'remote-endpoint' property, or it must not,
> but no "sometimes". But maybe I took his words too literally.
> 
> Then there's also the audio example Philipp mentioned, where there is no
> clear "outward from Soc" direction for the link, as the link was
> bi-directional and between two non-SoC devices.

Even if the link was unidirectional the "outward from SoC" direction isn't 
always defined for links between non-SoC devices.
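As a hypothetical illustration of that last point (invented node names),
consider a baseband modem exchanging audio with a codec. Data flows in both
directions and neither device sits "outward" of the other as seen from the
SoC, so no link-direction rule applies; double linking describes the
connection without needing one:

```dts
&audio_codec {
	port@2 {
		codec_to_modem: endpoint {
			remote-endpoint = <&modem_to_codec>;
		};
	};
};

&modem {
	port@0 {
		modem_to_codec: endpoint {
			remote-endpoint = <&codec_to_modem>;
		};
	};
};
```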

-- 
Regards,

Laurent Pinchart




Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-12 Thread Tomi Valkeinen
On 12/03/14 12:25, Russell King - ARM Linux wrote:
> On Mon, Mar 10, 2014 at 02:52:53PM +0100, Laurent Pinchart wrote:
>> In theory unidirectional links in DT are indeed enough. However, let's not 
>> forget the following.
>>
>> - There's no such thing as single start points for graphs. Sure, in some 
>> simple cases the graph will have a single start point, but that's not a 
>> generic rule. For instance the camera graphs 
>> http://ideasonboard.org/media/omap3isp.ps and 
>> http://ideasonboard.org/media/eyecam.ps have two camera sensors, and thus 
>> two 
>> starting points from a data flow point of view.
> 
> I think we need to stop thinking of a graph linked in terms of data
> flow - that's really not useful.
> 
> Consider a display subsystem.  The CRTC is the primary interface for
> the CPU - this is the "most interesting" interface, it's the interface
> which provides access to the picture to be displayed for the CPU.  Other
> interfaces are secondary to that purpose - reading the I2C DDC bus for
> the display information is all secondary to the primary purpose of
> displaying a picture.
> 
> For a capture subsystem, the primary interface for the CPU is the frame
> grabber (whether it be an already encoded frame or not.)  The sensor
> devices are all secondary to that.
> 
> So, the primary software interface in each case is where the data for
> the primary purpose is transferred.  This is the point at which these
> graphs should commence since this is where we would normally start
> enumeration of the secondary interfaces.
> 
> V4L2 even provides interfaces for this: you open the capture device,
> which then allows you to enumerate the capture device's inputs, and
> this in turn allows you to enumerate their properties.  You don't open
> a particular sensor and work back up the tree.

We do it the other way around in OMAP DSS. It's the displays the user is
interested in, so we enumerate the displays, and if the user wants to
enable a display, we then follow the links from the display towards the
SoC, configuring and enabling the components on the way.

I don't have a strong opinion on the direction, I think both have their
pros. In any case, that's more of a driver model thing, and I'm fine
with linking in the DT outwards from the SoC (presuming the
double-linking is not ok, which I still like best).

> I believe trying to do this according to the flow of data is just wrong.
> You should always describe things from the primary device for the CPU
> towards the peripheral devices and never the opposite direction.

In that case there's possibly the issue I mentioned in other email in
this thread: an encoder can be used in both a display and a capture
pipeline. Describing the links outwards from CPU means that sometimes
the encoder's input port is pointed at, and sometimes the encoder's
output port is pointed at.

That's possibly ok, but I think Grant was of the opinion that things
should be explicitly described in the binding documentation: either a
device's port must contain a 'remote-endpoint' property, or it must not,
but no "sometimes". But maybe I took his words too literally.

Then there's also the audio example Philipp mentioned, where there is no
clear "outward from Soc" direction for the link, as the link was
bi-directional and between two non-SoC devices.

 Tomi






Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-12 Thread Russell King - ARM Linux
On Mon, Mar 10, 2014 at 02:52:53PM +0100, Laurent Pinchart wrote:
> In theory unidirectional links in DT are indeed enough. However, let's not 
> forget the following.
> 
> - There's no such thing as single start points for graphs. Sure, in some 
> simple cases the graph will have a single start point, but that's not a 
> generic rule. For instance the camera graphs 
> http://ideasonboard.org/media/omap3isp.ps and 
> http://ideasonboard.org/media/eyecam.ps have two camera sensors, and thus two 
> starting points from a data flow point of view.

I think we need to stop thinking of a graph linked in terms of data
flow - that's really not useful.

Consider a display subsystem.  The CRTC is the primary interface for
the CPU - this is the "most interesting" interface, it's the interface
which provides access to the picture to be displayed for the CPU.  Other
interfaces are secondary to that purpose - reading the I2C DDC bus for
the display information is all secondary to the primary purpose of
displaying a picture.

For a capture subsystem, the primary interface for the CPU is the frame
grabber (whether it be an already encoded frame or not.)  The sensor
devices are all secondary to that.

So, the primary software interface in each case is where the data for
the primary purpose is transferred.  This is the point at which these
graphs should commence since this is where we would normally start
enumeration of the secondary interfaces.

V4L2 even provides interfaces for this: you open the capture device,
which then allows you to enumerate the capture device's inputs, and
this in turn allows you to enumerate their properties.  You don't open
a particular sensor and work back up the tree.

I believe trying to do this according to the flow of data is just wrong.
You should always describe things from the primary device for the CPU
towards the peripheral devices and never the opposite direction.

-- 
FTTC broadband for 0.8mile line: now at 9.7Mbps down 460kbps up... slowly
improving, and getting towards what was expected from it.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/






Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-12 Thread Laurent Pinchart
Hi Russell and Tomi,

On Wednesday 12 March 2014 12:47:09 Tomi Valkeinen wrote:
 On 12/03/14 12:25, Russell King - ARM Linux wrote:
  On Mon, Mar 10, 2014 at 02:52:53PM +0100, Laurent Pinchart wrote:
  In theory unidirectional links in DT are indeed enough. However, let's
  not forget the following.
  
  - There's no such thing as single start points for graphs. Sure, in some
  simple cases the graph will have a single start point, but that's not a
  generic rule. For instance the camera graphs
  http://ideasonboard.org/media/omap3isp.ps and
  http://ideasonboard.org/media/eyecam.ps have two camera sensors, and thus
  two starting points from a data flow point of view.
  
  I think we need to stop thinking of a graph linked in terms of data
  flow - that's really not useful.
  
  Consider a display subsystem.  The CRTC is the primary interface for
  the CPU - this is the most interesting interface, it's the interface
  which provides access to the picture to be displayed for the CPU.  Other
  interfaces are secondary to that purpose - reading the I2C DDC bus for
  the display information is all secondary to the primary purpose of
  displaying a picture.
  
  For a capture subsystem, the primary interface for the CPU is the frame
  grabber (whether it be an already encoded frame or not.)  The sensor
  devices are all secondary to that.
  
  So, the primary software interface in each case is where the data for
  the primary purpose is transferred.  This is the point at which these
  graphs should commence since this is where we would normally start
  enumeration of the secondary interfaces.
  
  V4L2 even provides interfaces for this: you open the capture device,
  which then allows you to enumerate the capture device's inputs, and
  this in turn allows you to enumerate their properties.  You don't open
  a particular sensor and work back up the tree.

Please note that this has partly changed a couple of years ago with the 
introduction of the media controller framework. Userspace now opens a logical 
media device that describes the topology of the hardware, and then accesses 
individual components directly, from sensor to DMA engine.

 We do it the other way around in OMAP DSS. It's the displays the user is
 interested in, so we enumerate the displays, and if the user wants to
 enable a display, we then follow the links from the display towards the
 SoC, configuring and enabling the components on the way.

The logical view of a device from a CPU perspective evolves over time, as APIs 
are refactored or created to support new hardware that comes with new 
paradigms and additional complexity. The hardware data flow direction, 
however, doesn't change. Modeling only the data flow direction in DT might be 
tempting, but that is probably too hasty a conclusion: even though DT should 
model the hardware, it ends up modeling a logical view of the hardware, so the 
question is not as settled as one might believe.

In the particular case of display devices I believe that using the data flow 
direction for links (assuming we can't use bidirectional links in DT) is a 
good model. It would allow parsing the whole graph at a reasonable cost (still 
higher than with bidirectional links) while making clear how to represent 
links. Let's not forget that with more complex devices not all components can 
be referenced directly from the CPU-side display controller.
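
A sketch of what outward, data-flow-direction links would look like for a
chain with an intermediate encoder (node names are made up). Each link is
stored only on the upstream side, so the panel can only be reached by walking
through the encoder:

```dts
dispc: display-controller {
        port {
                endpoint {
                        remote-endpoint = <&enc_in>;
                };
        };
};

hdmi-encoder {
        ports {
                port@0 {
                        enc_in: endpoint { };   /* sink: pointed at */
                };
                port@1 {
                        endpoint {
                                remote-endpoint = <&panel_in>;
                        };
                };
        };
};

panel {
        port {
                panel_in: endpoint { };         /* sink: pointed at */
        };
};
```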

 I don't have a strong opinion on the direction, I think both have their
 pros. In any case, that's more of a driver model thing, and I'm fine
 with linking in the DT outwards from the SoC (presuming the
 double-linking is not ok, which I still like best).
 
  I believe trying to do this according to the flow of data is just wrong.
  You should always describe things from the primary device for the CPU
  towards the peripheral devices and never the opposite direction.
 
 In that case there's possibly the issue I mentioned in other email in
 this thread: an encoder can be used in both a display and a capture
 pipeline. Describing the links outwards from CPU means that sometimes
 the encoder's input port is pointed at, and sometimes the encoder's
 output port is pointed at.
 
 That's possibly ok, but I think Grant was of the opinion that things
 should be explicitly described in the binding documentation: either a
 device's port must contain a 'remote-endpoint' property, or it must not,
 but no sometimes. But maybe I took his words too literally.
 
 Then there's also the audio example Philipp mentioned, where there is no
 clear outward from Soc direction for the link, as the link was
 bi-directional and between two non-SoC devices.

Even if the link were unidirectional, the "outward from SoC" direction isn't 
always defined for links between non-SoC devices.

-- 
Regards,

Laurent Pinchart




Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-11 Thread Philipp Zabel
Hi Laurent,

Am Dienstag, den 11.03.2014, 16:21 +0100 schrieb Laurent Pinchart:
> Hi Philipp,
> 
> On Tuesday 11 March 2014 16:07:00 Philipp Zabel wrote:
> > Am Montag, den 10.03.2014, 14:58 + schrieb Grant Likely:
> > > On Mon, 10 Mar 2014 14:52:53 +0100, Laurent Pinchart wrote:
> 
> [snip]
> 
> > > > In theory unidirectional links in DT are indeed enough. However, let's
> > > > not forget the following.
> > > > 
> > > > - There's no such thing as single start points for graphs. Sure, in some
> > > > simple cases the graph will have a single start point, but that's not a
> > > > generic rule. For instance the camera graphs
> > > > http://ideasonboard.org/media/omap3isp.ps and
> > > > http://ideasonboard.org/media/eyecam.ps have two camera sensors, and
> > > > thus two starting points from a data flow point of view. And if you
> > > > want a better understanding of how complex media graphs can become,
> > > > have a look at http://ideasonboard.org/media/vsp1.0.pdf (that's a real
> > > > world example, albeit all connections are internal to the SoC in that
> > > > particular case, and don't need to be described in DT).
> > > > 
> > > > - There's also no such thing as a master device that can just point to
> > > > slave devices. Once again simple cases exist where that model could
> > > > work, but real world examples exist of complex pipelines with dozens of
> > > > elements all implemented by a separate IP core and handled by separate
> > > > drivers, forming a graph with long chains and branches. We thus need
> > > > real graph bindings.
> > > > 
> > > > - Finally, having no backlinks in DT would make the software
> > > > implementation very complex. We need to be able to walk the graph in a
> > > > generic way without having any of the IP core drivers loaded, and
> > > > without any specific starting point. We would thus need to parse the
> > > > complete DT tree, looking at all nodes and trying to find out whether
> > > > they're part of the graph we're trying to walk. The complexity of the
> > > > operation would be at best quadratic to the number of nodes in the whole
> > > > DT and to the number of nodes in the graph.
> > > 
> > > Not really. To begin with, you cannot determine any meaning of a node
> > > across the tree (aside from it being an endpoint) without also
> > > understanding the binding that the node is a part of. That means you
> > > need to have something matching against the compatible string on both
> > > ends of the linkage. For instance:
> > > 
> > > panel {
> > >   compatible = "acme,lvds-panel";
> > >   lvds-port: port {
> > >   };
> > > };
> > > 
> > > display-controller {
> > >   compatible = "encom,video";
> > >   port {
> > >   remote-endpoint = <&lvds-port>;
> > >   };
> > > };
> > > 
> > > In the above example, the encom,video driver has absolutely zero
> > > information about what the acme,lvds-panel binding actually implements.
> > > There needs to be both a driver for the "acme,lvds-panel" binding and
> > > one for the "encom,video" binding (even if the acme,lvds-panel binding
> > > is very thin and defers the functionality to the video controller).
> > > 
> > > What you want here is the drivers to register each side of the
> > > connection. That could be modeled with something like the following
> > > (pseudocode):
> > > 
> > > struct of_endpoint {
> > > 
> > >   struct list_head list;
> > >   struct device_node *ep_node;
> > >   void *context;
> > >   void (*cb)(struct of_endpoint *ep, void *data);
> > > 
> > > }
> > > 
> > > int of_register_port(struct device_node *node, void (*cb)(struct of_endpoint
> > > *ep, void *data), void *data) {
> > > 
> > >   struct of_endpoint *ep = kzalloc(sizeof(*ep), GFP_KERNEL);
> > >   
> > >   ep->ep_node = node;
> > >   ep->context = data;
> > >   ep->cb = cb;
> > >   
> > >   /* store the endpoint to a list */
> > >   /* check if the endpoint has a remote-endpoint link */
> > >   /* If so, then link the two together and call the
> > >* callbacks */
> > > }
> > > 
> > > That's neither expensive nor complicated.
> > > 
> > > Originally I suggested walking the whole tree multiple times, but as
> > > mentioned that doesn't scale, and as I thought about the above it isn't
> > > even a valid thing to do. Everything has to be driven by drivers, so
> > > even if the backlinks are there, nothing can be done with the link until
> > > the other side goes through enumeration independently.
> > 
> > I have implemented your suggestion as follows. Basically, this allows
> > either endpoint to contain the remote-endpoint link, as long as all
> > drivers register their endpoints in the probe function and return
> > -EPROBE_DEFER from their component framework bind callback until all
> > their endpoints are connected.
> 
> Besides bringing the whole graph down when a single component can't be probed 
> (either because the corresponding hardware device is missing, broken, or the 
> driver isn't loaded), that's adding even one more 

Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-11 Thread Laurent Pinchart
Hi Philipp,

On Tuesday 11 March 2014 16:07:00 Philipp Zabel wrote:
> Am Montag, den 10.03.2014, 14:58 + schrieb Grant Likely:
> > On Mon, 10 Mar 2014 14:52:53 +0100, Laurent Pinchart wrote:

[snip]

> > > In theory unidirectional links in DT are indeed enough. However, let's
> > > not forget the following.
> > > 
> > > - There's no such thing as single start points for graphs. Sure, in some
> > > simple cases the graph will have a single start point, but that's not a
> > > generic rule. For instance the camera graphs
> > > http://ideasonboard.org/media/omap3isp.ps and
> > > http://ideasonboard.org/media/eyecam.ps have two camera sensors, and
> > > thus two starting points from a data flow point of view. And if you
> > > want a better understanding of how complex media graphs can become,
> > > have a look at http://ideasonboard.org/media/vsp1.0.pdf (that's a real
> > > world example, albeit all connections are internal to the SoC in that
> > > particular case, and don't need to be described in DT).
> > > 
> > > - There's also no such thing as a master device that can just point to
> > > slave devices. Once again simple cases exist where that model could
> > > work, but real world examples exist of complex pipelines with dozens of
> > > elements all implemented by a separate IP core and handled by separate
> > > drivers, forming a graph with long chains and branches. We thus need
> > > real graph bindings.
> > > 
> > > - Finally, having no backlinks in DT would make the software
> > > implementation very complex. We need to be able to walk the graph in a
> > > generic way without having any of the IP core drivers loaded, and
> > > without any specific starting point. We would thus need to parse the
> > > complete DT tree, looking at all nodes and trying to find out whether
> > > they're part of the graph we're trying to walk. The complexity of the
> > > operation would be at best quadratic to the number of nodes in the whole
> > > DT and to the number of nodes in the graph.
> > 
> > Not really. To begin with, you cannot determine any meaning of a node
> > across the tree (aside from it being an endpoint) without also
> > understanding the binding that the node is a part of. That means you
> > need to have something matching against the compatible string on both
> > ends of the linkage. For instance:
> > 
> > panel {
> > compatible = "acme,lvds-panel";
> > lvds-port: port {
> > };
> > };
> > 
> > display-controller {
> > compatible = "encom,video";
> > port {
> > remote-endpoint = <&lvds-port>;
> > };
> > };
> > 
> > In the above example, the encom,video driver has absolutely zero
> > information about what the acme,lvds-panel binding actually implements.
> > There needs to be both a driver for the "acme,lvds-panel" binding and
> > one for the "encom,video" binding (even if the acme,lvds-panel binding
> > is very thin and defers the functionality to the video controller).
> > 
> > What you want here is the drivers to register each side of the
> > connection. That could be modeled with something like the following
> > (pseudocode):
> > 
> > struct of_endpoint {
> > 
> > struct list_head list;
> > struct device_node *ep_node;
> > void *context;
> > void (*cb)(struct of_endpoint *ep, void *data);
> > 
> > }
> > 
> > int of_register_port(struct device_node *node, void (*cb)(struct of_endpoint
> > *ep, void *data), void *data) {
> > 
> > struct of_endpoint *ep = kzalloc(sizeof(*ep), GFP_KERNEL);
> > 
> > ep->ep_node = node;
> > ep->context = data;
> > ep->cb = cb;
> > 
> > /* store the endpoint to a list */
> > /* check if the endpoint has a remote-endpoint link */
> > /* If so, then link the two together and call the
> >  * callbacks */
> > }
> > 
> > That's neither expensive nor complicated.
> > 
> > Originally I suggested walking the whole tree multiple times, but as
> > mentioned that doesn't scale, and as I thought about the above it isn't
> > even a valid thing to do. Everything has to be driven by drivers, so
> > even if the backlinks are there, nothing can be done with the link until
> > the other side goes through enumeration independently.
> 
> I have implemented your suggestion as follows. Basically, this allows
> either endpoint to contain the remote-endpoint link, as long as all
> drivers register their endpoints in the probe function and return
> -EPROBE_DEFER from their component framework bind callback until all
> their endpoints are connected.

Besides bringing the whole graph down when a single component can't be probed 
(either because the corresponding hardware device is missing, broken, or the 
driver isn't loaded), that's adding even one more level of complexity with an 
additional callback. I'm afraid I can't accept it as-is, the result is just 
too complex for device drivers and not flexible enough.

I want to keep the ability to walk the graph without requiring all components 
to be probed 

Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-11 Thread Philipp Zabel
Hi Grant,

Am Montag, den 10.03.2014, 14:58 + schrieb Grant Likely:
> On Mon, 10 Mar 2014 14:52:53 +0100, Laurent Pinchart 
>  wrote:
> > On Monday 10 March 2014 12:18:20 Tomi Valkeinen wrote:
> > > On 08/03/14 13:41, Grant Likely wrote:
> > > >> Ok. If we go for single directional link, the question is then: which
> > > >> way? And is the direction different for display and camera, which are
> > > >> kind of reflections of each other?
> > > > 
> > > > In general I would recommend choosing whichever device you would
> > > > sensibly think of as a master. In the camera case I would choose the
> > > > camera controller node instead of the camera itself, and in the display
> > > > case I would choose the display controller instead of the panel. The
> > > > binding author needs to choose what she thinks makes the most sense, but
> > > > drivers can still use it if it turns out to be 'backwards'
> > > 
> > > I would perhaps choose the same approach, but at the same time I think
> > > it's all but clear. The display controller doesn't control the panel any
> > > more than a DMA controller controls, say, the display controller.
> > > 
> > > In fact, in earlier versions of OMAP DSS DT support I had a simpler port
> > > description, and in that I had the panel as the master (i.e. link from
> > > panel to dispc) because the panel driver uses the display controller's
> > > features to provide the panel device a data stream.
> > > 
> > > And even with the current OMAP DSS DT version, which uses the v4l2 style
> > > ports/endpoints, the driver model is still the same, and only links
> > > towards upstream are used.
> > > 
> > > So one reason I'm happy with the dual-linking is that I can easily
> > > follow the links from the downstream entities to upstream entities, and
> > > other people, who have different driver model, can easily do the opposite.
> > > 
> > > But I agree that single-linking is enough and this can be handled at
> > > runtime, even if it makes the code more complex. And perhaps requires
> > > extra data in the dts, to give the start points for the graph.
> > 
> > In theory unidirectional links in DT are indeed enough. However, let's not 
> > forget the following.
> > 
> > - There's no such thing as single start points for graphs. Sure, in some 
> > simple cases the graph will have a single start point, but that's not a 
> > generic rule. For instance the camera graphs 
> > http://ideasonboard.org/media/omap3isp.ps and 
> > http://ideasonboard.org/media/eyecam.ps have two camera sensors, and thus 
> > two 
> > starting points from a data flow point of view. And if you want a better 
> > understanding of how complex media graphs can become, have a look at 
> > http://ideasonboard.org/media/vsp1.0.pdf (that's a real world example, 
> > albeit 
> > all connections are internal to the SoC in that particular case, and don't 
> > need to be described in DT).
> > 
> > - There's also no such thing as a master device that can just point to 
> > slave 
> > devices. Once again simple cases exist where that model could work, but 
> > real 
> > world examples exist of complex pipelines with dozens of elements all 
> > implemented by a separate IP core and handled by separate drivers, forming 
> > a 
> > graph with long chains and branches. We thus need real graph bindings.
> > 
> > - Finally, having no backlinks in DT would make the software implementation 
> > very complex. We need to be able to walk the graph in a generic way without 
> > having any of the IP core drivers loaded, and without any specific starting 
> > point. We would thus need to parse the complete DT tree, looking at all 
> > nodes 
> > and trying to find out whether they're part of the graph we're trying to 
> > walk. 
> > The complexity of the operation would be at best quadratic to the number of 
> > nodes in the whole DT and to the number of nodes in the graph.
> 
> Not really. To begin with, you cannot determine any meaning of a node
> across the tree (aside from it being an endpoint) without also
> understanding the binding that the node is a part of. That means you
> need to have something matching against the compatible string on both
> ends of the linkage. For instance:
> 
> panel {
>   compatible = "acme,lvds-panel";
>   lvds-port: port {
>   };
> };
> 
> display-controller {
>   compatible = "encom,video";
>   port {
> >   remote-endpoint = <&lvds-port>;
>   };
> };
> 
> In the above example, the encom,video driver has absolutely zero
> information about what the acme,lvds-panel binding actually implements.
> There needs to be both a driver for the "acme,lvds-panel" binding and
> one for the "encom,video" binding (even if the acme,lvds-panel binding
> is very thin and defers the functionality to the video controller).
> 
> What you want here is the drivers to register each side of the
> connection. That could be modeled with something like the following
> (pseudocode):
> 
> struct of_endpoint {
>   

Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-11 Thread Philipp Zabel
Hi,

Am Dienstag, den 11.03.2014, 15:27 +0200 schrieb Tomi Valkeinen:
> On 11/03/14 15:16, Laurent Pinchart wrote:
> 
> >> And if I gathered Grant's opinion correctly (correct me if I'm wrong),
> >> he thinks things should be explicit, i.e. the bindings for, say, an
> >> encoder should state that the encoder's output endpoint _must_ contain a
> >> remote-endpoint property, whereas the encoder's input endpoint _must
> >> not_ contain a remote-endpoint property.
> > 
> > Actually my understanding was that DT links would have the same direction as 
> > the 
> > data flow. There would be no ambiguity in that case as the direction of the 
> 
> Ok. At least at some point in the discussion the rule of thumb was to
> have the "master" point at the "slave", which are a bit vague terms, but
> I think both display and camera controllers were given as examples of
> "master".
> 
> > data flow is known. What happens for bidirectional data flows still need to 
> > be 
> > discussed though. And if we want to use the of-graph bindings to describe 
> > graphs without a data flow, a decision will need to be taken there too.
> 
> Yep. My worry is that if the link direction is defined in the bindings
> for the device, somebody will at some point have a complex board which
> happens to use two devices in such a way, that either neither of them
> point to each other, or both point to each other.
> 
> Maybe we can make sure that never happens, but I feel a bit nervous
> about it especially for bi-directional cases. If the link has no clear
> main-dataflow direction, it may be a bit difficult to say which way to link.
> 
> With double-linking all those concerns just disappear.

another possibility would be to just leave it open in the generic
bindings:

"Two endpoints are considered linked together if any of them contains a
 'remote-endpoint' phandle property that points to the other. If both
 contain a 'remote-endpoint' property, both must point at each other."

As long as we make sure that the generic code is able to generate the
undirected graph from this, we could then let more specific bindings
(like video-interfaces.txt) define that the phandle links always should
point in both directions, in data flow direction, north, or outwards
from the 'master'.

regards
Philipp



Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-11 Thread Tomi Valkeinen
On 11/03/14 15:16, Laurent Pinchart wrote:

>> And if I gathered Grant's opinion correctly (correct me if I'm wrong),
>> he thinks things should be explicit, i.e. the bindings for, say, an
>> encoder should state that the encoder's output endpoint _must_ contain a
>> remote-endpoint property, whereas the encoder's input endpoint _must
>> not_ contain a remote-endpoint property.
> 
> Actually my understanding was that DT links would have the same direction as the 
> data flow. There would be no ambiguity in that case as the direction of the 

Ok. At least at some point in the discussion the rule of thumb was to
have the "master" point at the "slave", which are a bit vague terms, but
I think both display and camera controllers were given as examples of
"master".

> data flow is known. What happens for bidirectional data flows still needs to
> be discussed though. And if we want to use the of-graph bindings to describe
> graphs without a data flow, a decision will need to be taken there too.

Yep. My worry is that if the link direction is defined in the bindings
for the device, somebody will at some point have a complex board which
happens to use two devices in such a way that either neither of them
points to the other, or both point to each other.

Maybe we can make sure that never happens, but I feel a bit nervous
about it especially for bi-directional cases. If the link has no clear
main-dataflow direction, it may be a bit difficult to say which way to link.

With double-linking all those concerns just disappear.
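In the double-linked form the current v4l2 bindings use, each endpoint names
its remote, so the consistency of the two phandles can be checked
mechanically (label names invented here):

```dts
dispc {
	port {
		dispc_out: endpoint {
			remote-endpoint = <&enc_in>;
		};
	};
};

encoder {
	port {
		enc_in: endpoint {
			/* must point back at the endpoint above */
			remote-endpoint = <&dispc_out>;
		};
	};
};
```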

 Tomi




signature.asc
Description: OpenPGP digital signature


Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-11 Thread Laurent Pinchart
Hi Tomi,

On Tuesday 11 March 2014 14:59:20 Tomi Valkeinen wrote:
> On 11/03/14 13:43, Laurent Pinchart wrote:
> >> We could scan the whole tree for entities, ports and endpoints once, in
> >> the base oftree code, and put that into a graph structure, adding the
> >> backlinks. The of_graph_* helpers could then use that graph instead of
> >> the device tree.
> > 
> > That could work. The complexity would still be quadratic, but we would
> > parse the full device tree once only.
> > 
> > The runtime complexity would still be increased, as the graph helpers
> > would need to find the endpoint object in the parsed graph corresponding
> > to the DT node they get as an argument. That's proportional to the number
> > of graph elements, not the total number of DT nodes, so I suppose it's not
> > too bad.
> > 
> > We also need to make sure this would work with insertion of DT fragments
> > at runtime. Probably not a big deal, but it has to be kept in mind.
> 
> About the endpoint linking direction... As I think was suggested, the
> base logic would be to make endpoints point "outward" from the SoC, i.e.
> a display controller would point to a panel, and a capture controller
> would point to a sensor.
>
> But how about this case:
> 
> I have a simple video pipeline with a display controller, an encoder and
> a panel, as follows:
> 
> dispc -> encoder -> panel
> 
> Here the arrows show which way the remote-endpoint links point. So
> looking at the encoder, the encoder's input port is pointed at by the
> dispc, and the encoder's output port points at the panel.
> 
> Then, I have a capture pipeline, with a capture controller, an encoder
> (the same one that was used for display above) and a sensor, as follows:
> 
> camc -> encoder -> sensor
> 
> Again the arrows show the links. Note that here the encoder's _output_
> port is pointed at by the camc, and the encoder's _input_ port points at
> the sensor.
> 
> So depending on the use case, the endpoints would point to opposite
> direction from the encoder's point of view.
> 
> And if I gathered Grant's opinion correctly (correct me if I'm wrong),
> he thinks things should be explicit, i.e. the bindings for, say, an
> encoder should state that the encoder's output endpoint _must_ contain a
> remote-endpoint property, whereas the encoder's input endpoint _must
> not_ contain a remote-endpoint property.

Actually my understanding was that DT links would have the same direction as 
the data flow. There would be no ambiguity in that case as the direction of 
the data flow is known. What happens for bidirectional data flows still needs 
to be discussed though. And if we want to use the of-graph bindings to 
describe graphs without a data flow, a decision will need to be taken there too.
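With links following the data flow, Tomi's shared encoder would be wired the
same way in both pipelines: whatever produces data points at whatever
consumes it. A sketch, as two alternative board fragments (labels invented):

```dts
/* display pipeline: data flows dispc -> encoder -> panel */
dispc_out: endpoint { remote-endpoint = <&enc_in>; };
enc_out:   endpoint { remote-endpoint = <&panel_in>; };

/* capture pipeline: data flows sensor -> encoder -> camc */
sensor_out: endpoint { remote-endpoint = <&enc_in>; };
enc_out:    endpoint { remote-endpoint = <&camc_in>; };
```

In both cases the encoder's input port is pointed at and its output port
points onward, so the encoder's own binding stays the same.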

-- 
Regards,

Laurent Pinchart


signature.asc
Description: This is a digitally signed message part.


Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-11 Thread Andrzej Hajda
On 03/10/2014 04:15 PM, Laurent Pinchart wrote:
> Hi Grant,
> 
> On Monday 10 March 2014 14:58:15 Grant Likely wrote:
>> On Mon, 10 Mar 2014 14:52:53 +0100, Laurent Pinchart wrote:
>>> On Monday 10 March 2014 12:18:20 Tomi Valkeinen wrote:
 On 08/03/14 13:41, Grant Likely wrote:
>> Ok. If we go for single directional link, the question is then: which
>> way? And is the direction different for display and camera, which are
>> kind of reflections of each other?
>
> In general I would recommend choosing whichever device you would
> sensibly think of as a master. In the camera case I would choose the
> camera controller node instead of the camera itself, and in the
> display case I would choose the display controller instead of the
> panel. The binding author needs to choose what she thinks makes the
> most sense, but drivers can still use it if it turns out to be
> 'backwards'

 I would perhaps choose the same approach, but at the same time I think
 it's anything but clear. The display controller doesn't control the panel any
 more than a DMA controller controls, say, the display controller.

 In fact, in earlier versions of OMAP DSS DT support I had a simpler port
 description, and in that I had the panel as the master (i.e. link from
 panel to dispc) because the panel driver uses the display controller's
 features to provide the panel device a data stream.

 And even with the current OMAP DSS DT version, which uses the v4l2 style
 ports/endpoints, the driver model is still the same, and only links
 towards upstream are used.

 So one reason I'm happy with the dual-linking is that I can easily
 follow the links from the downstream entities to upstream entities, and
 other people, who have different driver model, can easily do the
 opposite.

 But I agree that single-linking is enough and this can be handled at
 runtime, even if it makes the code more complex. And perhaps requires
 extra data in the dts, to give the start points for the graph.
>>>
>>> In theory unidirectional links in DT are indeed enough. However, let's not
>>> forget the following.
>>>
>>> - There's no such thing as single start points for graphs. Sure, in some
>>> simple cases the graph will have a single start point, but that's not a
>>> generic rule. For instance the camera graphs
>>> http://ideasonboard.org/media/omap3isp.ps and
>>> http://ideasonboard.org/media/eyecam.ps have two camera sensors, and thus
>>> two starting points from a data flow point of view. And if you want a
>>> better understanding of how complex media graphs can become, have a look
>>> at http://ideasonboard.org/media/vsp1.0.pdf (that's a real world example,
>>> albeit all connections are internal to the SoC in that particular case,
>>> and don't need to be described in DT).
>>>
>>> - There's also no such thing as a master device that can just point to
>>> slave devices. Once again simple cases exist where that model could work,
>>> but real world examples exist of complex pipelines with dozens of
>>> elements all implemented by a separate IP core and handled by separate
>>> drivers, forming a graph with long chains and branches. We thus need real
>>> graph bindings.
>>>
>>> - Finally, having no backlinks in DT would make the software
>>> implementation very complex. We need to be able to walk the graph in a
>>> generic way without having any of the IP core drivers loaded, and without
>>> any specific starting point. We would thus need to parse the complete DT
>>> tree, looking at all nodes and trying to find out whether they're part of
>>> the graph we're trying to walk. The complexity of the operation would be
>>> at best quadratic to the number of nodes in the whole DT and to the number
>>> of nodes in the graph.
>>
>> Not really. To begin with, you cannot determine any meaning of a node
>> across the tree (aside from it being an endpoint)
> 
> That's the important part. I can assume the target node of the
> remote-endpoint phandle to be an endpoint, and can thus assume that it
> implements the of-graph bindings. That's all I need to be able to walk the
> graph in a generic way.
> 
>> without also understanding the binding that the node is a part of. That
>> means you need to have something matching against the compatible string on
>> both ends of the linkage. For instance:
>>
>> panel {
>>	compatible = "acme,lvds-panel";
>>	lvds-port: port {
>>	};
>> };
>>
>> display-controller {
>>	compatible = "encom,video";
>>	port {
>>		remote-endpoint = <&lvds-port>;
>>	};
>> };
>>
>> In the above example, the encom,video driver has absolutely zero
>> information about what the acme,lvds-panel binding actually implements.
>> There needs to be both a driver for the "acme,lvds-panel" binding and
>> one for the "encom,video" binding (even if the acme,lvds-panel binding
>> is very thin and defers the functionality to the video controller).
> 
> I absolutely agree with that. We need a driver for each device (in this case 
> the acme panel and the encom display controller), and we need those drivers 
> to register entities (in the generic sense of the term) for them to be able 
> to communicate with each other.

Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-11 Thread Tomi Valkeinen
On 11/03/14 13:43, Laurent Pinchart wrote:

>> We could scan the whole tree for entities, ports and endpoints once, in
>> the base oftree code, and put that into a graph structure, adding the
>> backlinks.
>> The of_graph_* helpers could then use that graph instead of the device
>> tree.
> 
> That could work. The complexity would still be quadratic, but we would parse 
> the full device tree once only.
> 
> The runtime complexity would still be increased, as the graph helpers would 
> need to find the endpoint object in the parsed graph corresponding to the DT 
> node they get as an argument. That's proportional to the number of graph 
> elements, not the total number of DT nodes, so I suppose it's not too bad.
> 
> We also need to make sure this would work with insertion of DT fragments at 
> runtime. Probably not a big deal, but it has to be kept in mind.
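For reference, enumerating a node's links is already generic with the
helpers this RFC proposes to move into drivers/of; a trimmed kernel-C
sketch, assuming the of_graph_* names from the patch (not a complete
driver, error handling omitted):

```c
#include <linux/of.h>
#include <linux/of_graph.h>
#include <linux/printk.h>

/* Walk every endpoint of 'dev' and print the device node on the remote
 * side of each link.  With single-directed links, an endpoint that is
 * only pointed *at* carries no remote-endpoint phandle of its own, so
 * this walk would miss that connection -- which is why a pre-built
 * graph with backlinks (or double-linking in DT) is being discussed. */
static void print_links(struct device_node *dev)
{
	struct device_node *ep = NULL, *remote;

	while ((ep = of_graph_get_next_endpoint(dev, ep))) {
		remote = of_graph_get_remote_port_parent(ep);
		if (!remote)
			continue;	/* unconnected or backlink-only endpoint */
		pr_info("%s -> %s\n", dev->name, remote->name);
		of_node_put(remote);
	}
}
```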

About the endpoint linking direction... As I think was suggested, the
base logic would be to make endpoints point "outward" from the SoC, i.e.
a display controller would point to a panel, and a capture controller
would point to a sensor.

But how about this case:

I have a simple video pipeline with a display controller, an encoder and
a panel, as follows:

dispc -> encoder -> panel

Here the arrows show which way the remote-endpoint links point. So
looking at the encoder, the encoder's input port is pointed at by the
dispc, and the encoder's output port points at the panel.

Then, I have a capture pipeline, with a capture controller, an encoder
(the same one that was used for display above) and a sensor, as follows:

camc -> encoder -> sensor

Again the arrows show the links. Note that here the encoder's _output_
port is pointed at by the camc, and the encoder's _input_ port points at
the sensor.

So depending on the use case, the endpoints would point to opposite
direction from the encoder's point of view.

And if I gathered Grant's opinion correctly (correct me if I'm wrong),
he thinks things should be explicit, i.e. the bindings for, say, an
encoder should state that the encoder's output endpoint _must_ contain a
remote-endpoint property, whereas the encoder's input endpoint _must
not_ contain a remote-endpoint property.

 Tomi




signature.asc
Description: OpenPGP digital signature


Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-11 Thread Laurent Pinchart
Hi Philipp,

On Monday 10 March 2014 16:40:30 Philipp Zabel wrote:
> On Monday, 10.03.2014 at 16:15 +0100, Laurent Pinchart wrote:
> > On Monday 10 March 2014 14:58:15 Grant Likely wrote:
> > > On Mon, 10 Mar 2014 14:52:53 +0100, Laurent Pinchart wrote:
> > > > On Monday 10 March 2014 12:18:20 Tomi Valkeinen wrote:
> > > > > On 08/03/14 13:41, Grant Likely wrote:
> > > > > >> Ok. If we go for single directional link, the question is then:
> > > > > >> which way? And is the direction different for display and camera,
> > > > > >> which are kind of reflections of each other?
> > > > > > 
> > > > > > In general I would recommend choosing whichever device you would
> > > > > > sensibly think of as a master. In the camera case I would choose
> > > > > > the camera controller node instead of the camera itself, and in
> > > > > > the display case I would choose the display controller instead of
> > > > > > the panel. The binding author needs to choose what she thinks
> > > > > > makes the most sense, but drivers can still use it if it turns
> > > > > > out to be 'backwards'
> > > > > 
> > > > > I would perhaps choose the same approach, but at the same time I
> > > > > think it's anything but clear. The display controller doesn't control the
> > > > > panel any more than a DMA controller controls, say, the display
> > > > > controller.
> > > > > 
> > > > > In fact, in earlier versions of OMAP DSS DT support I had a simpler
> > > > > port description, and in that I had the panel as the master (i.e.
> > > > > link from panel to dispc) because the panel driver uses the display
> > > > > controller's features to provide the panel device a data stream.
> > > > > 
> > > > > And even with the current OMAP DSS DT version, which uses the v4l2
> > > > > style ports/endpoints, the driver model is still the same, and only
> > > > > links towards upstream are used.
> > > > > 
> > > > > So one reason I'm happy with the dual-linking is that I can easily
> > > > > follow the links from the downstream entities to upstream entities,
> > > > > and other people, who have different driver model, can easily do the
> > > > > opposite.
> > > > > 
> > > > > But I agree that single-linking is enough and this can be handled at
> > > > > runtime, even if it makes the code more complex. And perhaps
> > > > > requires extra data in the dts, to give the start points for the
> > > > > graph.
> > > > 
> > > > In theory unidirectional links in DT are indeed enough. However, let's
> > > > not forget the following.
> > > > 
> > > > - There's no such thing as single start points for graphs. Sure, in
> > > > some simple cases the graph will have a single start point, but that's
> > > > not a generic rule. For instance the camera graphs
> > > > http://ideasonboard.org/media/omap3isp.ps and
> > > > http://ideasonboard.org/media/eyecam.ps have two camera sensors, and
> > > > thus two starting points from a data flow point of view. And if you
> > > > want a better understanding of how complex media graphs can become,
> > > > have a look at http://ideasonboard.org/media/vsp1.0.pdf (that's a real
> > > > world example, albeit all connections are internal to the SoC in that
> > > > particular case, and don't need to be described in DT).
> > > > 
> > > > - There's also no such thing as a master device that can just point to
> > > > slave devices. Once again simple cases exist where that model could
> > > > work, but real world examples exist of complex pipelines with dozens
> > > > of elements all implemented by a separate IP core and handled by
> > > > separate drivers, forming a graph with long chains and branches. We
> > > > thus need real graph bindings.
> > > > 
> > > > - Finally, having no backlinks in DT would make the software
> > > > implementation very complex. We need to be able to walk the graph in a
> > > > generic way without having any of the IP core drivers loaded, and
> > > > without any specific starting point. We would thus need to parse the
> > > > complete DT tree, looking at all nodes and trying to find out whether
> > > > they're part of the graph we're trying to walk. The complexity of the
> > > > operation would be at best quadratic to the number of nodes in the
> > > > whole DT and to the number of nodes in the graph.
> > > 
> > > Not really. To begin with, you cannot determine any meaning of a node
> > > across the tree (aside from it being an endpoint)
> > 
> > That's the important part. I can assume the target node of the
> > remote-endpoint phandle to be an endpoint, and can thus assume that it
> > implements the of-graph bindings. That's all I need to be able to walk
> > the graph in a generic way.
> >
> > > without also understanding the binding that the node is a part of. That
> > > means you need to have something matching against the compatible string
> > > on both ends of the linkage. For instance:
> > > 
> > > panel {
> > >   compatible = "acme,lvds-panel";
> > >   lvds-port: port {
> > >   };
> > > };
> > > 
> > > display-controller {
> > > 	compatible = "encom,video";
> > > 	port {
> > > 		remote-endpoint = <&lvds-port>;
> > > 	};
> > > };
> > > 
> > > In the above example, the encom,video driver has absolutely zero
> > > information about what the acme,lvds-panel binding actually implements.
> > > There needs to be both a driver for the "acme,lvds-panel" binding and
> > > one for the "encom,video" binding (even if the acme,lvds-panel binding
> > > is very thin and defers the functionality to the video controller).

Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-11 Thread Laurent Pinchart
Hi Philipp,

On Monday 10 March 2014 16:40:30 Philipp Zabel wrote:
 Am Montag, den 10.03.2014, 16:15 +0100 schrieb Laurent Pinchart:
  On Monday 10 March 2014 14:58:15 Grant Likely wrote:
   On Mon, 10 Mar 2014 14:52:53 +0100, Laurent Pinchart wrote:
On Monday 10 March 2014 12:18:20 Tomi Valkeinen wrote:
 On 08/03/14 13:41, Grant Likely wrote:
  Ok. If we go for single directional link, the question is then:
  which way? And is the direction different for display and camera,
  which are kind of reflections of each other?
  
  In general I would recommend choosing whichever device you would
  sensibly think of as a master. In the camera case I would choose
  the camera controller node instead of the camera itself, and in
  the display case I would choose the display controller instead of
  the panel. The binding author needs to choose what she things
  makes the most sense, but drivers can still use if it it turns out
  to be 'backwards'
 
 I would perhaps choose the same approach, but at the same time I
 think it's all but clear. The display controller doesn't control the
 panel any more than a DMA controller controls, say, the display
 controller.
 
 In fact, in earlier versions of OMAP DSS DT support I had a simpler
 port description, and in that I had the panel as the master (i.e.
 link from panel to dispc) because the panel driver uses the display
 controller's features to provide the panel device a data stream.
 
 And even with the current OMAP DSS DT version, which uses the v4l2
 style ports/endpoints, the driver model is still the same, and only
 links towards upstream are used.
 
 So one reason I'm happy with the dual-linking is that I can easily
 follow the links from the downstream entities to upstream entities,
 and other people, who have different driver model, can easily do the
 opposite.
 
 But I agree that single-linking is enough and this can be handled at
 runtime, even if it makes the code more complex. And perhaps
 requires extra data in the dts, to give the start points for the
 graph.

In theory unidirectional links in DT are indeed enough. However, let's
not forget the following.

- There's no such thing as single start points for graphs. Sure, in
some simple cases the graph will have a single start point, but that's
not a generic rule. For instance the camera graphs
http://ideasonboard.org/media/omap3isp.ps and
http://ideasonboard.org/media/eyecam.ps have two camera sensors, and
thus two starting points from a data flow point of view. And if you
want a better understanding of how complex media graphs can become,
have a look at http://ideasonboard.org/media/vsp1.0.pdf (that's a real
world example, albeit all connections are internal to the SoC in that
particular case, and don't need to be described in DT).

- There's also no such thing as a master device that can just point to
slave devices. Once again simple cases exist where that model could
work, but real world examples exist of complex pipelines with dozens
of elements all implemented by a separate IP core and handled by
separate drivers, forming a graph with long chains and branches. We
thus need real graph bindings.

- Finally, having no backlinks in DT would make the software
implementation very complex. We need to be able to walk the graph in a
generic way without having any of the IP core drivers loaded, and
without any specific starting point. We would thus need to parse the
complete DT tree, looking at all nodes and trying to find out whether
they're part of the graph we're trying to walk. The complexity of the
operation would be at best quadratic to the number of nodes in the
whole DT and to the number of nodes in the graph.
   
   Not really. To being with, you cannot determine any meaning of a node
   across the tree (aside from it being an endpoint)
  
  That's the important part. I can assume the target node of the
  remote-endpoint phandle to be an endpoint, and can thus assume that it
  implements the of-graph bindings. That's all I need to be able to walk
  the graph in a generic way.
 
   without also understanding the binding that the node is a part of. That
   means you need to have something matching against the compatible string
   on both ends of the linkage. For instance:
   
   panel {
 compatible = acme,lvds-panel;
 lvds-port: port {
 };
   };
   
   display-controller {
 compatible = encom,video;
 port {
 remote-endpoint = lvds-port;
 };
   };
   
   In the above example, the encom,video driver has absolutely zero
   information about what the acme,lvds-panel binding actually implements.
   There needs to be both a driver for the acme,lvds-panel binding and
   one for the encom,video binding (even if the 

Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-11 Thread Tomi Valkeinen
On 11/03/14 13:43, Laurent Pinchart wrote:

 We could scan the whole tree for entities, ports and endpoints once, in
 the base oftree code, and put that into a graph structure, adding the
 backlinks.
 The of_graph_* helpers could then use that graph instead of the device
 tree.
 
 That could work. The complexity would still be quadratic, but we would parse 
 the full device tree once only.
 
 The runtime complexity would still be increased, as the graph helpers would 
 need to find the endpoint object in the parsed graph corresponding to the DT 
 node they get as an argument. That's proportional to the number of graph 
 elements, not the total number of DT nodes, so I suppose it's not too bad.
 
 We also need to make sure this would work with insertion of DT fragments at 
 runtime. Probably not a big deal, but it has to be kept in mind.

About the endpoint linking direction... As I think was suggested, the
base logic would be to make endpoints point outward from the SoC, i.e.
a display controller would point to a panel, and a capture controller
would point to a sensor.

But how about this case:

I have a simple video pipeline with a display controller, an encoder and
a panel, as follows:

dispc - encoder - panel

Here the arrows show which way the remote-endpoint links point. So
looking at the encoder, the encoder's input port is pointed at by the
dispc, and the encoder's output port points at the panel.

Then, I have a capture pipeline, with a capture controller, an encoder
(the same one that was used for display above) and a sensor, as follows:

camc - encoder - sensor

Again the arrows show the links. Note that here the encoder's _output_
port is pointed at by the camc, and the encoder's _input_ port points at
the sensor.

So depending on the use case, the endpoints would point to opposite
direction from the encoder's point of view.

And if I gathered Grant's opinion correctly (correct me if I'm wrong),
he thinks things should be explicit, i.e. the bindings for, say, an
encoder should state that the encoder's output endpoint _must_ contain a
remote-endpoint property, whereas the encoder's input endpoint _must
not_ contain a remote-endpoint property.

 Tomi




signature.asc
Description: OpenPGP digital signature


Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-11 Thread Andrzej Hajda
On 03/10/2014 04:15 PM, Laurent Pinchart wrote:
 Hi Grant,
 
 On Monday 10 March 2014 14:58:15 Grant Likely wrote:
 On Mon, 10 Mar 2014 14:52:53 +0100, Laurent Pinchart wrote:
 On Monday 10 March 2014 12:18:20 Tomi Valkeinen wrote:
 On 08/03/14 13:41, Grant Likely wrote:
 Ok. If we go for single directional link, the question is then: which
 way? And is the direction different for display and camera, which are
 kind of reflections of each other?

 In general I would recommend choosing whichever device you would
 sensibly think of as a master. In the camera case I would choose the
 camera controller node instead of the camera itself, and in the
 display case I would choose the display controller instead of the
 panel. The binding author needs to choose what she things makes the
 most sense, but drivers can still use if it it turns out to be
 'backwards'

 I would perhaps choose the same approach, but at the same time I think
 it's all but clear. The display controller doesn't control the panel any
 more than a DMA controller controls, say, the display controller.

 In fact, in earlier versions of OMAP DSS DT support I had a simpler port
 description, and in that I had the panel as the master (i.e. link from
 panel to dispc) because the panel driver uses the display controller's
 features to provide the panel device a data stream.

 And even with the current OMAP DSS DT version, which uses the v4l2 style
 ports/endpoints, the driver model is still the same, and only links
 towards upstream are used.

 So one reason I'm happy with the dual-linking is that I can easily
 follow the links from the downstream entities to upstream entities, and
 other people, who have different driver model, can easily do the
 opposite.

 But I agree that single-linking is enough and this can be handled at
 runtime, even if it makes the code more complex. And perhaps requires
 extra data in the dts, to give the start points for the graph.

 In theory unidirectional links in DT are indeed enough. However, let's not
 forget the following.

 - There's no such thing as single start points for graphs. Sure, in some
 simple cases the graph will have a single start point, but that's not a
 generic rule. For instance the camera graphs
 http://ideasonboard.org/media/omap3isp.ps and
 http://ideasonboard.org/media/eyecam.ps have two camera sensors, and thus
 two starting points from a data flow point of view. And if you want a
 better understanding of how complex media graphs can become, have a look
 at http://ideasonboard.org/media/vsp1.0.pdf (that's a real world example,
 albeit all connections are internal to the SoC in that particular case,
 and don't need to be described in DT).

 - There's also no such thing as a master device that can just point to
 slave devices. Once again simple cases exist where that model could work,
 but real world examples exist of complex pipelines with dozens of
 elements all implemented by a separate IP core and handled by separate
 drivers, forming a graph with long chains and branches. We thus need real
 graph bindings.

 - Finally, having no backlinks in DT would make the software
 implementation very complex. We need to be able to walk the graph in a
 generic way without having any of the IP core drivers loaded, and without
 any specific starting point. We would thus need to parse the complete DT
 tree, looking at all nodes and trying to find out whether they're part of
 the graph we're trying to walk. The complexity of the operation would be
 at best quadratic to the number of nodes in the whole DT and to the number
 of nodes in the graph.

 Not really. To being with, you cannot determine any meaning of a node
 across the tree (aside from it being an endpoint)
 
 That's the important part. I can assume the target node of the 
 remote-endpoint 
 phandle to be an endpoint, and can thus assume that it implements the 
 of-graph 
 bindings. That's all I need to be able to walk the graph in a generic way.
 
 without also understanding the binding that the node is a part of. That
 means you need to have something matching against the compatible string on
 both ends of the linkage. For instance:

 panel {
  compatible = acme,lvds-panel;
  lvds-port: port {
  };
 };

 display-controller {
  compatible = encom,video;
  port {
  remote-endpoint = lvds-port;
  };
 };

 In the above example, the encom,video driver has absolutely zero
 information about what the acme,lvds-panel binding actually implements.
 There needs to be both a driver for the acme,lvds-panel binding and
 one for the encom,video binding (even if the acme,lvds-panel binding
 is very thin and defers the functionality to the video controller).
 
 I absolutely agree with that. We need a driver for each device (in this case 
 the acme panel and the encom display controller), and we need those drivers 
 to 
 register entities (in the generic sense of the term) for them to be able to 
 communicate with each other. The 

Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-11 Thread Laurent Pinchart
Hi Tomi,

On Tuesday 11 March 2014 14:59:20 Tomi Valkeinen wrote:
 On 11/03/14 13:43, Laurent Pinchart wrote:
  We could scan the whole tree for entities, ports and endpoints once, in
  the base oftree code, and put that into a graph structure, adding the
  backlinks. The of_graph_* helpers could then use that graph instead of
  the device tree.
  
  That could work. The complexity would still be quadratic, but we would
  parse the full device tree once only.
  
  The runtime complexity would still be increased, as the graph helpers
  would need to find the endpoint object in the parsed graph corresponding
  to the DT node they get as an argument. That's proportional to the number
  of graph elements, not the total number of DT nodes, so I suppose it's not
  too bad.
  
  We also need to make sure this would work with insertion of DT fragments
  at runtime. Probably not a big deal, but it has to be kept in mind.
 
 About the endpoint linking direction... As I think was suggested, the
 base logic would be to make endpoints point outward from the SoC, i.e.
 a display controller would point to a panel, and a capture controller
 would point to a sensor.

 But how about this case:
 
 I have a simple video pipeline with a display controller, an encoder and
 a panel, as follows:
 
 dispc - encoder - panel
 
 Here the arrows show which way the remote-endpoint links point. So
 looking at the encoder, the encoder's input port is pointed at by the
 dispc, and the encoder's output port points at the panel.
 
 Then, I have a capture pipeline, with a capture controller, an encoder
 (the same one that was used for display above) and a sensor, as follows:
 
 camc - encoder - sensor
 
 Again the arrows show the links. Note that here the encoder's _output_
 port is pointed at by the camc, and the encoder's _input_ port points at
 the sensor.
 
 So depending on the use case, the endpoints would point to opposite
 direction from the encoder's point of view.
 
 And if I gathered Grant's opinion correctly (correct me if I'm wrong),
 he thinks things should be explicit, i.e. the bindings for, say, an
 encoder should state that the encoder's output endpoint _must_ contain a
 remote-endpoint property, whereas the encoder's input endpoint _must
 not_ contain a remote-endpoint property.

Actually my understand was that DT links would have the same direction as the 
data flow. There would be no ambiguity in that case as the direction of the 
data flow is known. What happens for bidirectional data flows still need to be 
discussed though. And if we want to use the of-graph bindings to describe 
graphs without a data flow, a decision will need to be taken there too.

-- 
Regards,

Laurent Pinchart


signature.asc
Description: This is a digitally signed message part.


Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-11 Thread Tomi Valkeinen
On 11/03/14 15:16, Laurent Pinchart wrote:

 And if I gathered Grant's opinion correctly (correct me if I'm wrong),
 he thinks things should be explicit, i.e. the bindings for, say, an
 encoder should state that the encoder's output endpoint _must_ contain a
 remote-endpoint property, whereas the encoder's input endpoint _must
 not_ contain a remote-endpoint property.
 
 Actually my understand was that DT links would have the same direction as the 
 data flow. There would be no ambiguity in that case as the direction of the 

Ok. At least at some point in the discussion the rule of thumb was to
have the master point at the slave, which are a bit vague terms, but
I think both display and camera controllers were given as examples of
master.

 data flow is known. What happens for bidirectional data flows still needs
 to be discussed though. And if we want to use the of-graph bindings to
 describe graphs without a data flow, a decision will need to be taken
 there too.

Yep. My worry is that if the link direction is defined in the bindings
for the device, somebody will at some point have a complex board that
uses two devices in such a way that either neither of them points at
the other, or both point at each other.

Maybe we can make sure that never happens, but I feel a bit nervous
about it especially for bi-directional cases. If the link has no clear
main-dataflow direction, it may be a bit difficult to say which way to link.

With double-linking all those concerns just disappear.

 Tomi






Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-11 Thread Philipp Zabel
Hi,

On Tuesday, 11.03.2014, 15:27 +0200, Tomi Valkeinen wrote:
 On 11/03/14 15:16, Laurent Pinchart wrote:
 
  And if I gathered Grant's opinion correctly (correct me if I'm wrong),
  he thinks things should be explicit, i.e. the bindings for, say, an
  encoder should state that the encoder's output endpoint _must_ contain a
  remote-endpoint property, whereas the encoder's input endpoint _must
  not_ contain a remote-endpoint property.
  
  Actually my understanding was that DT links would have the same direction as 
  the 
  data flow. There would be no ambiguity in that case as the direction of the 
 
 Ok. At least at some point in the discussion the rule of thumb was to
 have the master point at the slave, which are a bit vague terms, but
 I think both display and camera controllers were given as examples of
 master.
 
  data flow is known. What happens for bidirectional data flows still needs
  to be discussed though. And if we want to use the of-graph bindings to
  describe graphs without a data flow, a decision will need to be taken
  there too.
 
 Yep. My worry is that if the link direction is defined in the bindings
 for the device, somebody will at some point have a complex board that
 uses two devices in such a way that either neither of them points at
 the other, or both point at each other.
 
 Maybe we can make sure that never happens, but I feel a bit nervous
 about it especially for bi-directional cases. If the link has no clear
 main-dataflow direction, it may be a bit difficult to say which way to link.
 
 With double-linking all those concerns just disappear.

another possibility would be to just leave it open in the generic
bindings:

Two endpoints are considered linked together if either of them contains a
 'remote-endpoint' phandle property that points to the other. If both
 contain a 'remote-endpoint' property, both must point at each other.

As long as we make sure that the generic code is able to generate the
undirected graph from this, we could then let more specific bindings
(like video-interfaces.txt) define that the phandle links always should
point in both directions, in data flow direction, north, or outwards
from the 'master'.
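That "either side may carry the link" rule can be modeled in a few lines. The sketch below is a toy model under invented names (not kernel code): each endpoint optionally stores a remote pointer standing in for the 'remote-endpoint' phandle, and a check decides whether two endpoints are linked, unrelated, or inconsistently double-linked.

```c
#include <stddef.h>

/* Toy stand-in for a DT endpoint node; "remote" models the optional
 * 'remote-endpoint' phandle (NULL when the property is absent). */
struct endpoint {
	const char *name;
	const struct endpoint *remote;
};

/*
 * Two endpoints are linked if either of them points at the other.
 * If both carry a link, the links must agree.
 * Returns 1 = linked, 0 = unrelated, -1 = inconsistent double link.
 */
int eps_linked(const struct endpoint *a, const struct endpoint *b)
{
	int ab = (a->remote == b);
	int ba = (b->remote == a);

	if (ab && ba)
		return 1;	/* consistent double link */
	if (ab && b->remote)
		return -1;	/* b carries a link, but not back at a */
	if (ba && a->remote)
		return -1;	/* a carries a link, but not at b */
	return ab || ba;	/* a single link is sufficient */
}
```

Generic code could run such a check while building the undirected graph, and reject malformed double links at runtime.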

regards
Philipp



Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-11 Thread Philipp Zabel
Hi Grant,

 On Monday, 10.03.2014, 14:58, Grant Likely wrote:
 On Mon, 10 Mar 2014 14:52:53 +0100, Laurent Pinchart 
 laurent.pinch...@ideasonboard.com wrote:
  On Monday 10 March 2014 12:18:20 Tomi Valkeinen wrote:
   On 08/03/14 13:41, Grant Likely wrote:
Ok. If we go for single directional link, the question is then: which
way? And is the direction different for display and camera, which are
kind of reflections of each other?

In general I would recommend choosing whichever device you would
sensibly think of as a master. In the camera case I would choose the
camera controller node instead of the camera itself, and in the display
case I would choose the display controller instead of the panel. The
 binding author needs to choose what she thinks makes the most sense, but
 drivers can still use it even if it turns out to be 'backwards'
   
   I would perhaps choose the same approach, but at the same time I think
 it's anything but clear. The display controller doesn't control the panel any
   more than a DMA controller controls, say, the display controller.
   
   In fact, in earlier versions of OMAP DSS DT support I had a simpler port
   description, and in that I had the panel as the master (i.e. link from
   panel to dispc) because the panel driver uses the display controller's
   features to provide the panel device a data stream.
   
   And even with the current OMAP DSS DT version, which uses the v4l2 style
   ports/endpoints, the driver model is still the same, and only links
   towards upstream are used.
   
   So one reason I'm happy with the dual-linking is that I can easily
   follow the links from the downstream entities to upstream entities, and
   other people, who have different driver model, can easily do the opposite.
   
   But I agree that single-linking is enough and this can be handled at
   runtime, even if it makes the code more complex. And perhaps requires
   extra data in the dts, to give the start points for the graph.
  
  In theory unidirectional links in DT are indeed enough. However, let's not 
  forget the following.
  
  - There's no such thing as single start points for graphs. Sure, in some
  simple cases the graph will have a single start point, but that's not a
  generic rule. For instance the camera graphs
  http://ideasonboard.org/media/omap3isp.ps and
  http://ideasonboard.org/media/eyecam.ps have two camera sensors, and thus
  two starting points from a data flow point of view. And if you want a
  better understanding of how complex media graphs can become, have a look
  at http://ideasonboard.org/media/vsp1.0.pdf (that's a real world example,
  albeit all connections are internal to the SoC in that particular case,
  and don't need to be described in DT).
  
  - There's also no such thing as a master device that can just point to
  slave devices. Once again simple cases exist where that model could work,
  but real world examples exist of complex pipelines with dozens of
  elements all implemented by a separate IP core and handled by separate
  drivers, forming a graph with long chains and branches. We thus need
  real graph bindings.
  
  - Finally, having no backlinks in DT would make the software
  implementation very complex. We need to be able to walk the graph in a
  generic way without having any of the IP core drivers loaded, and
  without any specific starting point. We would thus need to parse the
  complete DT tree, looking at all nodes and trying to find out whether
  they're part of the graph we're trying to walk. The complexity of the
  operation would be at best quadratic to the number of nodes in the whole
  DT and to the number of nodes in the graph.
 
 Not really. To begin with, you cannot determine any meaning of a node
 across the tree (aside from it being an endpoint) without also
 understanding the binding that the node is a part of. That means you
 need to have something matching against the compatible string on both
 ends of the linkage. For instance:
 
 panel {
    compatible = "acme,lvds-panel";
   lvds-port: port {
   };
 };
 
 display-controller {
    compatible = "encom,video";
   port {
    remote-endpoint = <&lvds-port>;
   };
 };
 
 In the above example, the encom,video driver has absolutely zero
 information about what the acme,lvds-panel binding actually implements.
 There needs to be both a driver for the acme,lvds-panel binding and
 one for the encom,video binding (even if the acme,lvds-panel binding
 is very thin and defers the functionality to the video controller).
 
 What you want here is the drivers to register each side of the
 connection. That could be modeled with something like the following
 (pseudocode):
 
 struct of_endpoint {
   struct list_head list;
   struct device_node *ep_node;
   void *context;
   void (*cb)(struct of_endpoint *ep, void *data);
 }
 
 int of_register_port(struct device *node, 

Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-11 Thread Laurent Pinchart
Hi Philipp,

On Tuesday 11 March 2014 16:07:00 Philipp Zabel wrote:
 On Monday, 10.03.2014, 14:58, Grant Likely wrote:
  On Mon, 10 Mar 2014 14:52:53 +0100, Laurent Pinchart wrote:

[snip]

   In theory unidirectional links in DT are indeed enough. However, let's
   not forget the following.
   
   - There's no such thing as single start points for graphs. Sure, in some
   simple cases the graph will have a single start point, but that's not a
   generic rule. For instance the camera graphs
   http://ideasonboard.org/media/omap3isp.ps and
   http://ideasonboard.org/media/eyecam.ps have two camera sensors, and
   thus two starting points from a data flow point of view. And if you
   want a better understanding of how complex media graphs can become,
   have a look at http://ideasonboard.org/media/vsp1.0.pdf (that's a real
   world example, albeit all connections are internal to the SoC in that
   particular case, and don't need to be described in DT).
   
   - There's also no such thing as a master device that can just point to
   slave devices. Once again simple cases exist where that model could
   work, but real world examples exist of complex pipelines with dozens of
   elements all implemented by a separate IP core and handled by separate
   drivers, forming a graph with long chains and branches. We thus need
   real graph bindings.
   
   - Finally, having no backlinks in DT would make the software
   implementation very complex. We need to be able to walk the graph in a
   generic way without having any of the IP core drivers loaded, and
   without any specific starting point. We would thus need to parse the
   complete DT tree, looking at all nodes and trying to find out whether
   they're part of the graph we're trying to walk. The complexity of the
   operation would be at best quadratic to the number of nodes in the whole
   DT and to the number of nodes in the graph.
  
  Not really. To begin with, you cannot determine any meaning of a node
  across the tree (aside from it being an endpoint) without also
  understanding the binding that the node is a part of. That means you
  need to have something matching against the compatible string on both
  ends of the linkage. For instance:
  
  panel {
  compatible = "acme,lvds-panel";
  lvds-port: port {
  };
  };
  
  display-controller {
  compatible = "encom,video";
  port {
  remote-endpoint = <&lvds-port>;
  };
  };
  
  In the above example, the encom,video driver has absolutely zero
  information about what the acme,lvds-panel binding actually implements.
  There needs to be both a driver for the acme,lvds-panel binding and
  one for the encom,video binding (even if the acme,lvds-panel binding
  is very thin and defers the functionality to the video controller).
  
  What you want here is the drivers to register each side of the
  connection. That could be modeled with something like the following
  (pseudocode):
  
  struct of_endpoint {
  
  struct list_head list;
  struct device_node *ep_node;
  void *context;
  void (*cb)(struct of_endpoint *ep, void *data);
  
  }
  
  int of_register_port(struct device *node, void (*cb)(struct of_endpoint
  *ep, void *data), void *data) {
  
  struct of_endpoint *ep = kzalloc(sizeof(*ep), GFP_KERNEL);
  
  ep->ep_node = node;
  ep->context = data;
  ep->cb = cb;
  
  /* store the endpoint to a list */
  /* check if the endpoint has a remote-endpoint link */
  /* If so, then link the two together and call the
   * callbacks */
  }
  
  That's neither expensive nor complicated.
  
  Originally I suggested walking the whole tree multiple times, but as
  mentioned that doesn't scale, and as I thought about the above it isn't
  even a valid thing to do. Everything has to be driven by drivers, so
  even if the backlinks are there, nothing can be done with the link until
  the other side goes through enumeration independently.
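A self-contained version of that registration scheme might look as follows. This is a sketch, not kernel code: struct device_node is reduced to a toy stand-in, the registry is a plain linked list, and all names are invented. The mechanics match the pseudocode above, though: whichever side registers last completes the pair, and both callbacks fire regardless of which side stored the link.

```c
#include <stddef.h>

struct node {
	const char *name;
	struct node *remote;	/* optional 'remote-endpoint' phandle */
};

struct of_endpoint {
	struct node *ep_node;
	void *context;
	void (*cb)(struct of_endpoint *self, struct of_endpoint *peer);
	struct of_endpoint *peer;
	struct of_endpoint *next;
};

static struct of_endpoint *ep_registry;

/* The one-directional link may be stored on either side. */
static int nodes_linked(const struct node *a, const struct node *b)
{
	return a->remote == b || b->remote == a;
}

/* Returns 1 if the endpoint was paired with its peer, 0 if it is
 * still waiting for the other side to register. */
int of_register_endpoint(struct of_endpoint *ep)
{
	struct of_endpoint *other;

	ep->peer = NULL;
	ep->next = ep_registry;
	ep_registry = ep;

	for (other = ep->next; other; other = other->next) {
		if (other->peer || !nodes_linked(ep->ep_node, other->ep_node))
			continue;
		ep->peer = other;
		other->peer = ep;
		if (ep->cb)
			ep->cb(ep, other);
		if (other->cb)
			other->cb(other, ep);
		return 1;
	}
	return 0;
}
```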
 
 I have implemented your suggestion as follows. Basically, this allows
 either endpoint to contain the remote-endpoint link, as long as all
 drivers register their endpoints in the probe function and return
 -EPROBE_DEFER from their component framework bind callback until all
 their endpoints are connected.

Besides bringing the whole graph down when a single component can't be probed
(either because the corresponding hardware device is missing, broken, or the
driver isn't loaded), that's adding yet another level of complexity with an
additional callback. I'm afraid I can't accept it as-is; the result is just
too complex for device drivers and not flexible enough.

I want to keep the ability to walk the graph without requiring all components
to be probed by their respective drivers. What happened to your suggestion of
parsing the whole DT once at boot time?
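For what it's worth, that one-pass idea can be sketched in a few lines. In this toy model (all names and the flat-array representation are invented for illustration) the endpoints found in the tree are flattened into an array and the one-directional link is an optional index; a single linear scan then derives every backlink, with no driver probed and no starting point needed.

```c
#define NO_LINK (-1)

/* remote[i]: index of the endpoint that endpoint i points at via its
 * 'remote-endpoint' link (which may sit on either side of a pair), or
 * NO_LINK if the property is absent.
 * peer[i]:   derived undirected pairing (output), O(n) to compute. */
void pair_endpoints(const int *remote, int *peer, int n)
{
	int i;

	for (i = 0; i < n; i++)
		peer[i] = NO_LINK;
	for (i = 0; i < n; i++) {
		if (remote[i] == NO_LINK)
			continue;
		peer[i] = remote[i];
		peer[remote[i]] = i;	/* backlink derived, not stored in DT */
	}
}
```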

 From fdda1fb2bd133200d4620adcbb28697cb360e1cb Mon Sep 17 00:00:00 2001
 From: Philipp Zabel p.za...@pengutronix.de

Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-11 Thread Philipp Zabel
Hi Laurent,

On Tuesday, 11.03.2014, 16:21 +0100, Laurent Pinchart wrote:
 Hi Philipp,
 
 On Tuesday 11 March 2014 16:07:00 Philipp Zabel wrote:
   On Monday, 10.03.2014, 14:58, Grant Likely wrote:
   On Mon, 10 Mar 2014 14:52:53 +0100, Laurent Pinchart wrote:
 
 [snip]
 
In theory unidirectional links in DT are indeed enough. However, let's
not forget the following.

- There's no such thing as single start points for graphs. Sure, in some
simple cases the graph will have a single start point, but that's not a
generic rule. For instance the camera graphs
http://ideasonboard.org/media/omap3isp.ps and
http://ideasonboard.org/media/eyecam.ps have two camera sensors, and
thus two starting points from a data flow point of view. And if you
want a better understanding of how complex media graphs can become,
have a look at http://ideasonboard.org/media/vsp1.0.pdf (that's a real
world example, albeit all connections are internal to the SoC in that
particular case, and don't need to be described in DT).

- There's also no such thing as a master device that can just point to
slave devices. Once again simple cases exist where that model could
work, but real world examples exist of complex pipelines with dozens of
elements all implemented by a separate IP core and handled by separate
drivers, forming a graph with long chains and branches. We thus need
real graph bindings.

- Finally, having no backlinks in DT would make the software
implementation very complex. We need to be able to walk the graph in a
generic way without having any of the IP core drivers loaded, and
without any specific starting point. We would thus need to parse the
complete DT tree, looking at all nodes and trying to find out whether
they're part of the graph we're trying to walk. The complexity of the
operation would be at best quadratic to the number of nodes in the whole
DT and to the number of nodes in the graph.
   
    Not really. To begin with, you cannot determine any meaning of a node
   across the tree (aside from it being an endpoint) without also
   understanding the binding that the node is a part of. That means you
   need to have something matching against the compatible string on both
   ends of the linkage. For instance:
   
   panel {
  compatible = "acme,lvds-panel";
 lvds-port: port {
 };
   };
   
   display-controller {
  compatible = "encom,video";
 port {
  remote-endpoint = <&lvds-port>;
 };
   };
   
   In the above example, the encom,video driver has absolutely zero
   information about what the acme,lvds-panel binding actually implements.
   There needs to be both a driver for the acme,lvds-panel binding and
   one for the encom,video binding (even if the acme,lvds-panel binding
   is very thin and defers the functionality to the video controller).
   
   What you want here is the drivers to register each side of the
   connection. That could be modeled with something like the following
   (pseudocode):
   
   struct of_endpoint {
   
 struct list_head list;
 struct device_node *ep_node;
 void *context;
 void (*cb)(struct of_endpoint *ep, void *data);
   
   }
   
   int of_register_port(struct device *node, void (*cb)(struct of_endpoint
   *ep, void *data), void *data) {
   
 struct of_endpoint *ep = kzalloc(sizeof(*ep), GFP_KERNEL);
 
  ep->ep_node = node;
  ep->context = data;
  ep->cb = cb;
 
 /* store the endpoint to a list */
 /* check if the endpoint has a remote-endpoint link */
 /* If so, then link the two together and call the
  * callbacks */
   }
   
    That's neither expensive nor complicated.
   
   Originally I suggested walking the whole tree multiple times, but as
   mentioned that doesn't scale, and as I thought about the above it isn't
   even a valid thing to do. Everything has to be driven by drivers, so
   even if the backlinks are there, nothing can be done with the link until
   the other side goes through enumeration independently.
  
  I have implemented your suggestion as follows. Basically, this allows
  either endpoint to contain the remote-endpoint link, as long as all
  drivers register their endpoints in the probe function and return
  -EPROBE_DEFER from their component framework bind callback until all
  their endpoints are connected.
 
 Besides bringing the whole graph down when a single component can't be probed
 (either because the corresponding hardware device is missing, broken, or the
 driver isn't loaded), that's adding yet another level of complexity with an
 additional callback.

That callback could be completely optional. But you are right that with
this model, each device in the graph can't look beyond the directly
connected devices (easily). On the other hand, I'm not quite sure how
useful this really is given that you can't know how a device's ports

Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-10 Thread Philipp Zabel
On Monday, 10.03.2014, 16:15 +0100, Laurent Pinchart wrote:
> Hi Grant,
> 
> On Monday 10 March 2014 14:58:15 Grant Likely wrote:
> > On Mon, 10 Mar 2014 14:52:53 +0100, Laurent Pinchart wrote:
> > > On Monday 10 March 2014 12:18:20 Tomi Valkeinen wrote:
> > > > On 08/03/14 13:41, Grant Likely wrote:
> > > > >> Ok. If we go for single directional link, the question is then: which
> > > > >> way? And is the direction different for display and camera, which are
> > > > >> kind of reflections of each other?
> > > > > 
> > > > > In general I would recommend choosing whichever device you would
> > > > > sensibly think of as a master. In the camera case I would choose the
> > > > > camera controller node instead of the camera itself, and in the
> > > > > display case I would choose the display controller instead of the
> > > > > panel. The binding author needs to choose what she thinks makes the
> > > > > most sense, but drivers can still use it even if it turns out to be
> > > > > 'backwards'
> > > > 
> > > > I would perhaps choose the same approach, but at the same time I think
> > > > it's anything but clear. The display controller doesn't control the panel any
> > > > more than a DMA controller controls, say, the display controller.
> > > > 
> > > > In fact, in earlier versions of OMAP DSS DT support I had a simpler port
> > > > description, and in that I had the panel as the master (i.e. link from
> > > > panel to dispc) because the panel driver uses the display controller's
> > > > features to provide the panel device a data stream.
> > > > 
> > > > And even with the current OMAP DSS DT version, which uses the v4l2 style
> > > > ports/endpoints, the driver model is still the same, and only links
> > > > towards upstream are used.
> > > > 
> > > > So one reason I'm happy with the dual-linking is that I can easily
> > > > follow the links from the downstream entities to upstream entities, and
> > > > other people, who have different driver model, can easily do the
> > > > opposite.
> > > > 
> > > > But I agree that single-linking is enough and this can be handled at
> > > > runtime, even if it makes the code more complex. And perhaps requires
> > > > extra data in the dts, to give the start points for the graph.
> > > 
> > > In theory unidirectional links in DT are indeed enough. However, let's not
> > > forget the following.
> > > 
> > > - There's no such thing as single start points for graphs. Sure, in some
> > > simple cases the graph will have a single start point, but that's not a
> > > generic rule. For instance the camera graphs
> > > http://ideasonboard.org/media/omap3isp.ps and
> > > http://ideasonboard.org/media/eyecam.ps have two camera sensors, and thus
> > > two starting points from a data flow point of view. And if you want a
> > > better understanding of how complex media graphs can become, have a look
> > > at http://ideasonboard.org/media/vsp1.0.pdf (that's a real world example,
> > > albeit all connections are internal to the SoC in that particular case,
> > > and don't need to be described in DT).
> > > 
> > > - There's also no such thing as a master device that can just point to
> > > slave devices. Once again simple cases exist where that model could work,
> > > but real world examples exist of complex pipelines with dozens of
> > > elements all implemented by a separate IP core and handled by separate
> > > drivers, forming a graph with long chains and branches. We thus need real
> > > graph bindings.
> > > 
> > > - Finally, having no backlinks in DT would make the software
> > > implementation very complex. We need to be able to walk the graph in a
> > > generic way without having any of the IP core drivers loaded, and without
> > > any specific starting point. We would thus need to parse the complete DT
> > > tree, looking at all nodes and trying to find out whether they're part of
> > > the graph we're trying to walk. The complexity of the operation would be
> > > at best quadratic to the number of nodes in the whole DT and to the number
> > > of nodes in the graph.
> > 
> > Not really. To begin with, you cannot determine any meaning of a node
> > across the tree (aside from it being an endpoint)
> 
> That's the important part. I can assume the target node of the remote-endpoint
> phandle to be an endpoint, and can thus assume that it implements the of-graph
> bindings. That's all I need to be able to walk the graph in a generic way.
> 
> > without also understanding the binding that the node is a part of. That
> > means you need to have something matching against the compatible string on
> > both ends of the linkage. For instance:
> > 
> > panel {
> > compatible = "acme,lvds-panel";
> > lvds-port: port {
> > };
> > };
> > 
> > display-controller {
> > compatible = "encom,video";
> > port {
> > remote-endpoint = <&lvds-port>;
> > };
> > };
> > 
> > In the above example, the encom,video driver has absolutely zero
> > information about what the 

Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-10 Thread Laurent Pinchart
Hi Grant,

On Monday 10 March 2014 14:58:15 Grant Likely wrote:
> On Mon, 10 Mar 2014 14:52:53 +0100, Laurent Pinchart wrote:
> > On Monday 10 March 2014 12:18:20 Tomi Valkeinen wrote:
> > > On 08/03/14 13:41, Grant Likely wrote:
> > > >> Ok. If we go for single directional link, the question is then: which
> > > >> way? And is the direction different for display and camera, which are
> > > >> kind of reflections of each other?
> > > > 
> > > > In general I would recommend choosing whichever device you would
> > > > sensibly think of as a master. In the camera case I would choose the
> > > > camera controller node instead of the camera itself, and in the
> > > > display case I would choose the display controller instead of the
> > > > panel. The binding author needs to choose what she thinks makes the
> > > > most sense, but drivers can still use it even if it turns out to be
> > > > 'backwards'
> > > 
> > > I would perhaps choose the same approach, but at the same time I think
> > > it's anything but clear. The display controller doesn't control the panel any
> > > more than a DMA controller controls, say, the display controller.
> > > 
> > > In fact, in earlier versions of OMAP DSS DT support I had a simpler port
> > > description, and in that I had the panel as the master (i.e. link from
> > > panel to dispc) because the panel driver uses the display controller's
> > > features to provide the panel device a data stream.
> > > 
> > > And even with the current OMAP DSS DT version, which uses the v4l2 style
> > > ports/endpoints, the driver model is still the same, and only links
> > > towards upstream are used.
> > > 
> > > So one reason I'm happy with the dual-linking is that I can easily
> > > follow the links from the downstream entities to upstream entities, and
> > > other people, who have different driver model, can easily do the
> > > opposite.
> > > 
> > > But I agree that single-linking is enough and this can be handled at
> > > runtime, even if it makes the code more complex. And perhaps requires
> > > extra data in the dts, to give the start points for the graph.
> > 
> > In theory unidirectional links in DT are indeed enough. However, let's not
> > forget the following.
> > 
> > - There's no such thing as single start points for graphs. Sure, in some
> > simple cases the graph will have a single start point, but that's not a
> > generic rule. For instance the camera graphs
> > http://ideasonboard.org/media/omap3isp.ps and
> > http://ideasonboard.org/media/eyecam.ps have two camera sensors, and thus
> > two starting points from a data flow point of view. And if you want a
> > better understanding of how complex media graphs can become, have a look
> > at http://ideasonboard.org/media/vsp1.0.pdf (that's a real world example,
> > albeit all connections are internal to the SoC in that particular case,
> > and don't need to be described in DT).
> > 
> > - There's also no such thing as a master device that can just point to
> > slave devices. Once again simple cases exist where that model could work,
> > but real world examples exist of complex pipelines with dozens of
> > elements all implemented by a separate IP core and handled by separate
> > drivers, forming a graph with long chains and branches. We thus need real
> > graph bindings.
> > 
> > - Finally, having no backlinks in DT would make the software
> > implementation very complex. We need to be able to walk the graph in a
> > generic way without having any of the IP core drivers loaded, and without
> > any specific starting point. We would thus need to parse the complete DT
> > tree, looking at all nodes and trying to find out whether they're part of
> > the graph we're trying to walk. The complexity of the operation would be
> > at best quadratic to the number of nodes in the whole DT and to the number
> > of nodes in the graph.
> 
> Not really. To begin with, you cannot determine any meaning of a node
> across the tree (aside from it being an endpoint)

That's the important part. I can assume the target node of the remote-endpoint 
phandle to be an endpoint, and can thus assume that it implements the of-graph 
bindings. That's all I need to be able to walk the graph in a generic way.

> without also understanding the binding that the node is a part of. That
> means you need to have something matching against the compatible string on
> both ends of the linkage. For instance:
> 
> panel {
>   compatible = "acme,lvds-panel";
>   lvds-port: port {
>   };
> };
> 
> display-controller {
>   compatible = "encom,video";
>   port {
>   remote-endpoint = <&lvds-port>;
>   };
> };
> 
> In the above example, the encom,video driver has absolutely zero
> information about what the acme,lvds-panel binding actually implements.
> There needs to be both a driver for the "acme,lvds-panel" binding and
> one for the "encom,video" binding (even if the acme,lvds-panel binding
> is very thin and defers the functionality to the video controller).

Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-10 Thread Grant Likely
On Mon, 10 Mar 2014 14:52:53 +0100, Laurent Pinchart wrote:
> On Monday 10 March 2014 12:18:20 Tomi Valkeinen wrote:
> > On 08/03/14 13:41, Grant Likely wrote:
> > >> Ok. If we go for single directional link, the question is then: which
> > >> way? And is the direction different for display and camera, which are
> > >> kind of reflections of each other?
> > > 
> > > In general I would recommend choosing whichever device you would
> > > sensibly think of as a master. In the camera case I would choose the
> > > camera controller node instead of the camera itself, and in the display
> > > case I would choose the display controller instead of the panel. The
> > > binding author needs to choose what she thinks makes the most sense, but
> > > drivers can still use it even if it turns out to be 'backwards'
> > 
> > I would perhaps choose the same approach, but at the same time I think
> > it's anything but clear. The display controller doesn't control the panel any
> > more than a DMA controller controls, say, the display controller.
> > 
> > In fact, in earlier versions of OMAP DSS DT support I had a simpler port
> > description, and in that I had the panel as the master (i.e. link from
> > panel to dispc) because the panel driver uses the display controller's
> > features to provide the panel device a data stream.
> > 
> > And even with the current OMAP DSS DT version, which uses the v4l2 style
> > ports/endpoints, the driver model is still the same, and only links
> > towards upstream are used.
> > 
> > So one reason I'm happy with the dual-linking is that I can easily
> > follow the links from the downstream entities to upstream entities, and
> > other people, who have different driver model, can easily do the opposite.
> > 
> > But I agree that single-linking is enough and this can be handled at
> > runtime, even if it makes the code more complex. And perhaps requires
> > extra data in the dts, to give the start points for the graph.
> 
> In theory unidirectional links in DT are indeed enough. However, let's not 
> forget the following.
> 
> - There's no such thing as single start points for graphs. Sure, in some 
> simple cases the graph will have a single start point, but that's not a 
> generic rule. For instance the camera graphs 
> http://ideasonboard.org/media/omap3isp.ps and 
> http://ideasonboard.org/media/eyecam.ps have two camera sensors, and thus two 
> starting points from a data flow point of view. And if you want a better 
> understanding of how complex media graphs can become, have a look at 
> http://ideasonboard.org/media/vsp1.0.pdf (that's a real world example, albeit 
> all connections are internal to the SoC in that particular case, and don't 
> need to be described in DT).
> 
> - There's also no such thing as a master device that can just point to slave 
> devices. Once again simple cases exist where that model could work, but real 
> world examples exist of complex pipelines with dozens of elements all 
> implemented by a separate IP core and handled by separate drivers, forming a 
> graph with long chains and branches. We thus need real graph bindings.
> 
> - Finally, having no backlinks in DT would make the software implementation 
> very complex. We need to be able to walk the graph in a generic way without 
> having any of the IP core drivers loaded, and without any specific starting 
> point. We would thus need to parse the complete DT tree, looking at all nodes 
> and trying to find out whether they're part of the graph we're trying to 
> walk. 
> The complexity of the operation would be at best quadratic to the number of 
> nodes in the whole DT and to the number of nodes in the graph.

Not really. To begin with, you cannot determine any meaning of a node
across the tree (aside from it being an endpoint) without also
understanding the binding that the node is a part of. That means you
need to have something matching against the compatible string on both
ends of the linkage. For instance:

panel {
compatible = "acme,lvds-panel";
lvds-port: port {
};
};

display-controller {
compatible = "encom,video";
port {
remote-endpoint = <&lvds-port>;
};
};

In the above example, the encom,video driver has absolutely zero
information about what the acme,lvds-panel binding actually implements.
There needs to be both a driver for the "acme,lvds-panel" binding and
one for the "encom,video" binding (even if the acme,lvds-panel binding
is very thin and defers the functionality to the video controller).

What you want here is the drivers to register each side of the
connection. That could be modeled with something like the following
(pseudocode):

struct of_endpoint {
struct list_head list;
struct device_node *ep_node;
void *context;
void (*cb)(struct of_endpoint *ep, void *data);
}

int of_register_port(struct device_node *node, void (*cb)(struct of_endpoint *ep, 
void *data), void *data)
{
struct of_endpoint *ep = kzalloc(sizeof(*ep), GFP_KERNEL);

ep->ep_node = node;

Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-10 Thread Grant Likely
On Mon, 10 Mar 2014 12:18:20 +0200, Tomi Valkeinen  
wrote:
> On 08/03/14 13:41, Grant Likely wrote:
> 
> >> Ok. If we go for single directional link, the question is then: which
> >> way? And is the direction different for display and camera, which are
> >> kind of reflections of each other?
> > 
> > In general I would recommend choosing whichever device you would
> > sensibly think of as a master. In the camera case I would choose the
> > camera controller node instead of the camera itself, and in the display
> > case I would choose the display controller instead of the panel. The
> > binding author needs to choose what she thinks makes the most sense, but
> > drivers can still use it if it turns out to be 'backwards'
> 
> I would perhaps choose the same approach, but at the same time I think
> it's all but clear. The display controller doesn't control the panel any
> more than a DMA controller controls, say, the display controller.

In a sense it doesn't actually matter. You sensibly want to choose the
most likely direction that drivers would go looking for a device it
depends on, but it can be matched up at runtime regardless of the
direction chosen by the binding.

g.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-10 Thread Tomi Valkeinen
On 10/03/14 15:52, Laurent Pinchart wrote:

> In theory unidirectional links in DT are indeed enough. However, let's not 
> forget the following.
> 
> - There's no such thing as single start points for graphs. Sure, in some 
> simple cases the graph will have a single start point, but that's not a 
> generic rule. For instance the camera graphs 
> http://ideasonboard.org/media/omap3isp.ps and 
> http://ideasonboard.org/media/eyecam.ps have two camera sensors, and thus two 
> starting points from a data flow point of view. And if you want a better 
> understanding of how complex media graphs can become, have a look at 
> http://ideasonboard.org/media/vsp1.0.pdf (that's a real world example, albeit 
> all connections are internal to the SoC in that particular case, and don't 
> need to be described in DT).
> 
> - There's also no such thing as a master device that can just point to slave 
> devices. Once again simple cases exist where that model could work, but real 
> world examples exist of complex pipelines with dozens of elements all 
> implemented by a separate IP core and handled by separate drivers, forming a 
> graph with long chains and branches. We thus need real graph bindings.
> 
> - Finally, having no backlinks in DT would make the software implementation 
> very complex. We need to be able to walk the graph in a generic way without 
> having any of the IP core drivers loaded, and without any specific starting 
> point. We would thus need to parse the complete DT tree, looking at all nodes 
> and trying to find out whether they're part of the graph we're trying to 
> walk. 
> The complexity of the operation would be at best quadratic to the number of 
> nodes in the whole DT and to the number of nodes in the graph.

I did use plural when I said "to give the start points...".

If you have a list of starting points in the DT, a "graph helper" or
something could create a runtime representation of the graph at some
early phase during the boot, which would include backlinks. The
individual drivers could use that runtime graph, instead of the DT graph.

But it still sounds considerably more complex than double-links in DT.

 Tomi






Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-10 Thread Laurent Pinchart
On Monday 10 March 2014 12:18:20 Tomi Valkeinen wrote:
> On 08/03/14 13:41, Grant Likely wrote:
> >> Ok. If we go for single directional link, the question is then: which
> >> way? And is the direction different for display and camera, which are
> >> kind of reflections of each other?
> > 
> > In general I would recommend choosing whichever device you would
> > sensibly think of as a master. In the camera case I would choose the
> > camera controller node instead of the camera itself, and in the display
> > case I would choose the display controller instead of the panel. The
> > binding author needs to choose what she thinks makes the most sense, but
> > drivers can still use it if it turns out to be 'backwards'
> 
> I would perhaps choose the same approach, but at the same time I think
> it's all but clear. The display controller doesn't control the panel any
> more than a DMA controller controls, say, the display controller.
> 
> In fact, in earlier versions of OMAP DSS DT support I had a simpler port
> description, and in that I had the panel as the master (i.e. link from
> panel to dispc) because the panel driver uses the display controller's
> features to provide the panel device a data stream.
> 
> And even with the current OMAP DSS DT version, which uses the v4l2 style
> ports/endpoints, the driver model is still the same, and only links
> towards upstream are used.
> 
> So one reason I'm happy with the dual-linking is that I can easily
> follow the links from the downstream entities to upstream entities, and
> other people, who have different driver model, can easily do the opposite.
> 
> But I agree that single-linking is enough and this can be handled at
> runtime, even if it makes the code more complex. And perhaps requires
> extra data in the dts, to give the start points for the graph.

In theory unidirectional links in DT are indeed enough. However, let's not 
forget the following.

- There's no such thing as single start points for graphs. Sure, in some 
simple cases the graph will have a single start point, but that's not a 
generic rule. For instance the camera graphs 
http://ideasonboard.org/media/omap3isp.ps and 
http://ideasonboard.org/media/eyecam.ps have two camera sensors, and thus two 
starting points from a data flow point of view. And if you want a better 
understanding of how complex media graphs can become, have a look at 
http://ideasonboard.org/media/vsp1.0.pdf (that's a real world example, albeit 
all connections are internal to the SoC in that particular case, and don't 
need to be described in DT).

- There's also no such thing as a master device that can just point to slave 
devices. Once again simple cases exist where that model could work, but real 
world examples exist of complex pipelines with dozens of elements all 
implemented by a separate IP core and handled by separate drivers, forming a 
graph with long chains and branches. We thus need real graph bindings.

- Finally, having no backlinks in DT would make the software implementation 
very complex. We need to be able to walk the graph in a generic way without 
having any of the IP core drivers loaded, and without any specific starting 
point. We would thus need to parse the complete DT tree, looking at all nodes 
and trying to find out whether they're part of the graph we're trying to walk. 
The complexity of the operation would be at best quadratic to the number of 
nodes in the whole DT and to the number of nodes in the graph.

-- 
Regards,

Laurent Pinchart




Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-10 Thread Tomi Valkeinen
On 08/03/14 13:41, Grant Likely wrote:

>> Ok. If we go for single directional link, the question is then: which
>> way? And is the direction different for display and camera, which are
>> kind of reflections of each other?
> 
> In general I would recommend choosing whichever device you would
> sensibly think of as a master. In the camera case I would choose the
> camera controller node instead of the camera itself, and in the display
> case I would choose the display controller instead of the panel. The
> binding author needs to choose what she thinks makes the most sense, but
> drivers can still use it if it turns out to be 'backwards'

I would perhaps choose the same approach, but at the same time I think
it's all but clear. The display controller doesn't control the panel any
more than a DMA controller controls, say, the display controller.

In fact, in earlier versions of OMAP DSS DT support I had a simpler port
description, and in that I had the panel as the master (i.e. link from
panel to dispc) because the panel driver uses the display controller's
features to provide the panel device a data stream.

And even with the current OMAP DSS DT version, which uses the v4l2 style
ports/endpoints, the driver model is still the same, and only links
towards upstream are used.

So one reason I'm happy with the dual-linking is that I can easily
follow the links from the downstream entities to upstream entities, and
other people, who have different driver model, can easily do the opposite.

But I agree that single-linking is enough and this can be handled at
runtime, even if it makes the code more complex. And perhaps requires
extra data in the dts, to give the start points for the graph.

 Tomi















Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-10 Thread Laurent Pinchart
Hi Grant,

On Monday 10 March 2014 14:58:15 Grant Likely wrote:
 On Mon, 10 Mar 2014 14:52:53 +0100, Laurent Pinchart wrote:
  On Monday 10 March 2014 12:18:20 Tomi Valkeinen wrote:
   On 08/03/14 13:41, Grant Likely wrote:
Ok. If we go for single directional link, the question is then: which
way? And is the direction different for display and camera, which are
kind of reflections of each other?

In general I would recommend choosing whichever device you would
sensibly think of as a master. In the camera case I would choose the
camera controller node instead of the camera itself, and in the
display case I would choose the display controller instead of the
panel. The binding author needs to choose what she thinks makes the
most sense, but drivers can still use it if it turns out to be
'backwards'
   
   I would perhaps choose the same approach, but at the same time I think
   it's all but clear. The display controller doesn't control the panel any
   more than a DMA controller controls, say, the display controller.
   
   In fact, in earlier versions of OMAP DSS DT support I had a simpler port
   description, and in that I had the panel as the master (i.e. link from
   panel to dispc) because the panel driver uses the display controller's
   features to provide the panel device a data stream.
   
   And even with the current OMAP DSS DT version, which uses the v4l2 style
   ports/endpoints, the driver model is still the same, and only links
   towards upstream are used.
   
   So one reason I'm happy with the dual-linking is that I can easily
   follow the links from the downstream entities to upstream entities, and
   other people, who have different driver model, can easily do the
   opposite.
   
   But I agree that single-linking is enough and this can be handled at
   runtime, even if it makes the code more complex. And perhaps requires
   extra data in the dts, to give the start points for the graph.
  
  In theory unidirectional links in DT are indeed enough. However, let's not
  forget the following.
  
  - There's no such thing as single start points for graphs. Sure, in some
  simple cases the graph will have a single start point, but that's not a
  generic rule. For instance the camera graphs
  http://ideasonboard.org/media/omap3isp.ps and
  http://ideasonboard.org/media/eyecam.ps have two camera sensors, and thus
  two starting points from a data flow point of view. And if you want a
  better understanding of how complex media graphs can become, have a look
  at http://ideasonboard.org/media/vsp1.0.pdf (that's a real world example,
  albeit all connections are internal to the SoC in that particular case,
  and don't need to be described in DT).
  
  - There's also no such thing as a master device that can just point to
  slave devices. Once again simple cases exist where that model could work,
  but real world examples exist of complex pipelines with dozens of
  elements all implemented by a separate IP core and handled by separate
  drivers, forming a graph with long chains and branches. We thus need real
  graph bindings.
  
  - Finally, having no backlinks in DT would make the software
  implementation very complex. We need to be able to walk the graph in a
  generic way without having any of the IP core drivers loaded, and without
  any specific starting point. We would thus need to parse the complete DT
  tree, looking at all nodes and trying to find out whether they're part of
  the graph we're trying to walk. The complexity of the operation would be
  at best quadratic to the number of nodes in the whole DT and to the number
  of nodes in the graph.
 
 Not really. To begin with, you cannot determine any meaning of a node
 across the tree (aside from it being an endpoint)

That's the important part. I can assume the target node of the remote-endpoint 
phandle to be an endpoint, and can thus assume that it implements the of-graph 
bindings. That's all I need to be able to walk the graph in a generic way.

 without also understanding the binding that the node is a part of. That
 means you need to have something matching against the compatible string on
 both ends of the linkage. For instance:
 
 panel {
   compatible = "acme,lvds-panel";
   lvds-port: port {
   };
 };
 
 display-controller {
   compatible = "encom,video";
   port {
   remote-endpoint = <&lvds-port>;
   };
 };
 
 In the above example, the encom,video driver has absolutely zero
 information about what the acme,lvds-panel binding actually implements.
 There needs to be both a driver for the acme,lvds-panel binding and
 one for the encom,video binding (even if the acme,lvds-panel binding
 is very thin and defers the functionality to the video controller).

I absolutely agree with that. We need a driver for each device (in this case 
the acme panel and the encom display controller), and we need those drivers to 
register entities (in the generic sense of 

Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-10 Thread Philipp Zabel
On Monday, 10.03.2014, at 16:15 +0100, Laurent Pinchart wrote:
 Hi Grant,
 
 On Monday 10 March 2014 14:58:15 Grant Likely wrote:
  On Mon, 10 Mar 2014 14:52:53 +0100, Laurent Pinchart wrote:
   On Monday 10 March 2014 12:18:20 Tomi Valkeinen wrote:
On 08/03/14 13:41, Grant Likely wrote:
 Ok. If we go for single directional link, the question is then: which
 way? And is the direction different for display and camera, which are
 kind of reflections of each other?
 
 In general I would recommend choosing whichever device you would
 sensibly think of as a master. In the camera case I would choose the
 camera controller node instead of the camera itself, and in the
 display case I would choose the display controller instead of the
 panel. The binding author needs to choose what she thinks makes the
 most sense, but drivers can still use it if it turns out to be
 'backwards'

I would perhaps choose the same approach, but at the same time I think
it's all but clear. The display controller doesn't control the panel any
more than a DMA controller controls, say, the display controller.

In fact, in earlier versions of OMAP DSS DT support I had a simpler port
description, and in that I had the panel as the master (i.e. link from
panel to dispc) because the panel driver uses the display controller's
features to provide the panel device a data stream.

And even with the current OMAP DSS DT version, which uses the v4l2 style
ports/endpoints, the driver model is still the same, and only links
towards upstream are used.

So one reason I'm happy with the dual-linking is that I can easily
follow the links from the downstream entities to upstream entities, and
other people, who have different driver model, can easily do the
opposite.

But I agree that single-linking is enough and this can be handled at
runtime, even if it makes the code more complex. And perhaps requires
extra data in the dts, to give the start points for the graph.
   
   In theory unidirectional links in DT are indeed enough. However, let's not
   forget the following.
   
   - There's no such thing as single start points for graphs. Sure, in some
   simple cases the graph will have a single start point, but that's not a
   generic rule. For instance the camera graphs
   http://ideasonboard.org/media/omap3isp.ps and
   http://ideasonboard.org/media/eyecam.ps have two camera sensors, and thus
   two starting points from a data flow point of view. And if you want a
   better understanding of how complex media graphs can become, have a look
   at http://ideasonboard.org/media/vsp1.0.pdf (that's a real world example,
   albeit all connections are internal to the SoC in that particular case,
   and don't need to be described in DT).
   
   - There's also no such thing as a master device that can just point to
   slave devices. Once again simple cases exist where that model could work,
   but real world examples exist of complex pipelines with dozens of
   elements all implemented by a separate IP core and handled by separate
   drivers, forming a graph with long chains and branches. We thus need real
   graph bindings.
   
   - Finally, having no backlinks in DT would make the software
   implementation very complex. We need to be able to walk the graph in a
   generic way without having any of the IP core drivers loaded, and without
   any specific starting point. We would thus need to parse the complete DT
   tree, looking at all nodes and trying to find out whether they're part of
   the graph we're trying to walk. The complexity of the operation would be
   at best quadratic to the number of nodes in the whole DT and to the number
   of nodes in the graph.
  
  Not really. To begin with, you cannot determine any meaning of a node
  across the tree (aside from it being an endpoint)
 
 That's the important part. I can assume the target node of the 
 remote-endpoint 
 phandle to be an endpoint, and can thus assume that it implements the 
 of-graph 
 bindings. That's all I need to be able to walk the graph in a generic way.
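
As an aside, the symmetric linkage Laurent refers to looks roughly like this in the v4l2-style bindings (all node and label names here are invented for illustration):

```dts
/* Sketch of double-linked endpoints per video-interfaces.txt;
 * node and label names are invented. */
sensor {
	port {
		sensor_out: endpoint {
			remote-endpoint = <&csi_in>;     /* link in data-flow direction */
		};
	};
};

csi-receiver {
	port {
		csi_in: endpoint {
			remote-endpoint = <&sensor_out>; /* matching backlink */
		};
	};
};
```

Verifying consistency then amounts to checking that each endpoint's remote-endpoint target points back at it.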
 
  without also understanding the binding that the node is a part of. That
  means you need to have something matching against the compatible string on
  both ends of the linkage. For instance:
  
  panel {
    compatible = "acme,lvds-panel";
    lvds-port: port {
    };
  };
  
  display-controller {
    compatible = "encom,video";
    port {
      remote-endpoint = <&lvds-port>;
    };
  };
  
  In the above example, the encom,video driver has absolutely zero
  information about what the acme,lvds-panel binding actually implements.
  There needs to be both a driver for the acme,lvds-panel binding and
  one for the encom,video binding (even if the acme,lvds-panel binding
  is very thin and defers the functionality to the video controller).
 
 I absolutely agree with that. We need a driver 

Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-08 Thread Sylwester Nawrocki

On 03/08/2014 12:41 PM, Grant Likely wrote:

>  >  Another thought. In terms of the pattern, I would add a recommendation
>  >  that there should be a way to identify ports of a particular type. ie.
>  >  If I were using the pattern to implement a patch bay of DSP filters,
>  >  where each input and output, then each target node should have a unique
>  >  identifier property analogous to "interrupt-controller" or
>  >  "gpio-controller". In this fictitious example I would probably choose
>  >  "audiostream-input-port" and "audiostream-output-port" as empty
>  >  properties in the port nodes. (I'm not suggesting a change to the
>  >  existing binding, but making a recommendation to new users).
>
>  I don't see any harm in that, but I don't right away see what could be
>  done with them? Did you have something in mind?


It would help with schema validation and allow ports of the same
interface to get grouped together.


>  I guess those could be used to study the graph before the drivers have
>  been loaded. After the drivers have been loaded, the drivers should
>  somehow register themselves and the ports/endpoints. And as the driver
>  knows what kind of ports they are, it can supply this information in the
>  runtime data structures.
>
>  If we do that, would it be better to have two pieces of data:
>  input/output/bi-directional, and the port type (say, mipi-dpi, lvds, etc.)?


I'm not sure about the direction information (it also came up when we
originally discussed this binding), but the port type information would
be useful. As it turns out, it is not always possible to describe a data
bus interface type/data transmission protocol with a set of primitive
properties in an endpoint node. Especially in cases when the physical
layer is basically identical, for instance in case of MIPI CSI-2 and
SMIA CCP2, or in future MIPI CSI-3 and MIPI CSI-2.

In general there could likely be more than one supported, if both the
transmitter and the receiver are sufficiently configurable.

Then, to be able to fully describe a data port, perhaps we could
add a property like 'interface-type' or 'bus-type' with values like
"mipi-dbi", "mipi-dsi", "mipi-csi2", "smia-ccp2", "hdmi", etc.


Sure. That's worth considering.
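
A port node combining both suggestions (an empty type-marker property plus an interface-type property) might look like this; neither property name is part of an accepted binding at the time of this discussion, they are purely illustrative:

```dts
/* Hypothetical properties, sketched from the discussion above. */
camera-sensor {
	port {
		/* empty marker property, analogous to "gpio-controller" */
		videostream-output-port;
		endpoint {
			/* proposed bus/interface-type hint */
			bus-type = "mipi-csi2";
			remote-endpoint = <&csi_in>;  /* label defined elsewhere */
		};
	};
};
```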

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-08 Thread Russell King - ARM Linux
On Sat, Mar 08, 2014 at 01:05:50AM +0800, Grant Likely wrote:
> On Wed, 26 Feb 2014 15:48:49 +0100, Philipp Zabel  
> wrote:
> > Or two separate display controllers with two parallel outputs each, a
> > 4-port multiplexer, and an encoder (e.g. lvds):
> > 
> > ,--------------------.
> > | display            |
> > | controller [port@0]----[port@0],-----------.
> > |            [port@1]----[port@1]|           |
> > `--------------------'           |    mux    |
> > ,--------------------.           |           |      ,---------.
> > | display    [port@0]----[port@2]|  [port@4]-----[port]       |
> > | controller [port@1]----[port@3]|           |      | encoder |
> > `--------------------'           `-----------'      `---------'
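
In DT terms, the mux in Philipp's diagram might be described along these lines (a sketch with invented labels, following the ports/endpoints convention):

```dts
/* Sketch only; the dc*_out* and encoder_in labels are invented and
 * would be defined at the remote devices' endpoint nodes. */
mux {
	ports {
		#address-cells = <1>;
		#size-cells = <0>;
		/* four inputs, one from each display controller output */
		port@0 { reg = <0>; endpoint { remote-endpoint = <&dc0_out0>; }; };
		port@1 { reg = <1>; endpoint { remote-endpoint = <&dc0_out1>; }; };
		port@2 { reg = <2>; endpoint { remote-endpoint = <&dc1_out0>; }; };
		port@3 { reg = <3>; endpoint { remote-endpoint = <&dc1_out1>; }; };
		/* single output towards the encoder */
		port@4 { reg = <4>; endpoint { remote-endpoint = <&encoder_in>; }; };
	};
};
```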
> 
> Similar question here, how is the above complex assembled into a logical
> device? Is there an overarching controller device for this component
> complex?

This is exactly the problem we discussed at Kernel Summit during the DRM
et.al. session.  It's the multiple component problem.  We have this solved
in a subsystem and firmware independent manner.

The way this works in imx-drm is that the graph of connections is scanned
to discover which component devices are required, and once all the
components are present, we initialise the logical device.  Further than
that, the driver structure is an implemention specific detail.

The starting point is the CPU facing part, because that's what provides
the logical interfaces, especially as there may be more than one logical
interface (which each of which may be entirely separate kernel
subsystems - display vs capture.)

> >  ,----.       ,-----------.
> >  | dc [port]----[port@0]   |
> >  `----'       |  encoder  |       ,-------.
> >               |  [port@1]----[port] panel |
> >               `-----------'       `-------'
> 
> In all of the above examples I see a unidirectional data flow. There are
> producer ports and consumer ports. Is there any (reasonable) possibility
> of peer ports that are bidirectional?

Generally, they will be unidirectional, but when you realise that you want
to be able to walk the graph from the CPU point of view, the data flow
direction is meaningless.  What you need to know is how the components
are connected together.

If you model the links as per the data flow direction, then while you may
be able to walk the display side - eg, dc -> encoder -> panel due to your
DT links, you wouldn't be able to walk from the CPU interface down to your
camera sensor - you would need to do a full scan of the DT for every
iteration to reverse the direction of the linkages.
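
To illustrate the point: with links only in the data-flow direction, the CPU-facing capture device below carries no reference to the sensor, so reaching the sensor from the CPU side means scanning every node in the tree (a single-linked sketch, names invented):

```dts
/* Single-linked sketch; only the sensor knows about the connection. */
camera-sensor {
	port {
		endpoint {
			remote-endpoint = <&capture_in>;  /* sensor -> bridge */
		};
	};
};

capture-bridge {          /* the CPU-facing device */
	port {
		capture_in: endpoint {
			/* no remote-endpoint here in a single-linked scheme */
		};
	};
};
```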

> > According to video-interfaces.txt, it is expected that endpoints contain
> > phandles pointing to the remote endpoint on both sides. I'd like to
> > leave this up to the more specialized bindings, but I can see that this
> > makes enumerating the connections starting from each device tree node
> > easier, for example at probe time.
> 
> This has come up in the past. That approach comes down to premature
> optimization at the expense of making the data structure more prone to
> inconsistency. I consider it to be a bad pattern.
> 
> Backlinks are good for things like dynamically linked lists that need to
> be fast and the software fully manages the links. For a data structure like
> the FDT it is better to have the data in one place, and one place only.
> Besides, computers are *good* at parsing data structures. :-)
> 
> I appreciate that existing drivers may be using the backlinks, but I
> strongly recommend not doing it for new users.

"Backlinks" doesn't convey to me exactly what you mean here.  Are you
talking about the ones in the opposite direction to data flow, or the
ones pointing upstream towards the CPU?  (I'm going to use upstream to
refer to the ones pointing towards the CPU.)

That matters: as I've said above, when creating logical devices, you
tend to start at the CPU interfaces and work your way down the graph.
Working the opposite way is much harder, especially if you hit some
kind of muxing arrangement which results in two sensors which combine
into one logical grabbing device.

> > If the back links are not provided in the device tree, a device at the
> > receiving end of a remote-endpoint phandle can only know about connected
> > remote ports after the kernel parsed the whole graph and created the
> > back links itself.
> 
> A whole tree parse isn't expensive. We do it all the time.

A whole tree parse may not be expensive, but to do it multiple times
increases the cost manyfold.  Taking the first example above, it may
be needed to do that five times, one time per link, and that may be
just one subsystem.  It could very easily become 10 times, at which
point the expense is an order of magnitude more than it was.

We do have real hardware which is arranged as per that first diagram.
It's the iMX6Q - two IPUs each with two display controllers, giving a
total of four output video streams.  Those are then fed into a set of
muxes which then feed into a set of encoders.  Effectively, all those

Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-08 Thread Grant Likely
On Sat, 8 Mar 2014 12:33:12 +0200, Tomi Valkeinen  wrote:
> On 07/03/14 19:05, Grant Likely wrote:
> > On Wed, 26 Feb 2014 15:48:49 +0100, Philipp Zabel  
> > wrote:
> >> Hi Grant,
> >>
> >> thank you for the comments.
> > 
> > Hi Philipp,
> > 
> > I've got lots of comments and questions below, but I must say thank you
> > for doing this. It is a helpful description.
> 
> Thanks for the comments. I'll answer from my point of view, which may be
> different than Philipp's.
> 
> > Bear with me, I'm going to comment on each point. I'm not criticizing,
> > but I do want to understand the HW design. In particular I want to
> > understand why the linkage is treated as bidirectional instead of one
> > device being the master.
> > 
> > In this specific example, what would be managing the transfer? Is there
> > an overarching driver that assembles the pieces, or would you expect
> > individual drivers to find each other?
> 
> The direction of the dataflow depends on the case. For camera, the data
> flows from external components towards the SoC. For display, vice versa.
> I don't know much about camera, but for display there are bi-directional
> buses, like MIPI DBI and MIPI DSI. However, in those cases, the main
> flow direction is clear, towards the display.
> 
> Then again, these are meant to be generic graph bindings, and the
> dataflow could be fully bi-directional.
> 
> As for the driver model (does it matter when we are talking about DT
> bindings?), I think there are very different opinions. My thought for
> display is that there is always an overarching driver, something that
> manages the video pipeline as a whole.
> 
> However, each of the individual drivers in the video pipeline will
> control its input ports. So, say, a panel driver gets a command from the
> overarching control driver to enable the panel device. The panel driver
> reacts by calling the encoder (i.e. the one that outputs pixels to the
> panel), commanding it to enable the video stream.
> 
> I know some (many?) people think the other way around, that the encoder
> commands the panel to enable itself. I don't think that's versatile
> enough. As a theoretical, slightly exaggerated, example, we could have a
> panel that wants the pixel clock to be enabled for a short while before
> enabling the actual pixel stream. And maybe the panel driver needs to do
> something before the pixel stream is enabled (say, send a command to the
> panel).
> 
> The above is very easy to do if the panel driver is in control of the
> received video stream. It's rather difficult to do if the encoder is in
> control.
> 
> Also, I think a panel (or any encoder) can be compared to a display
> controller: a display controller uses DMA to transfer pixel data from
> the memory to the display controller. A panel or encoder uses the
> incoming video bus to transfer pixel data from the previous display
> component to the panel or encoder device. And I don't think anyone says
> the DMA driver should be in control of the display controller.
> 
> But this is going quite a bit into the SW architecture. Does it matter
> when we are talking about the representation of HW in the DT data?

It matters to the point that the binding author makes a decision about
how to represent the hardware. The driver architecture doesn't need to
match the binding layout, but it is nice when they line up.

> > In all of the above examples I see a unidirectional data flow. There are
> > producer ports and consumer ports. Is there any (reasonable) possibility
> > of peer ports that are bidirectional?
> 
> Yes, as I mentioned above. However, I do believe that at the moment all
> the cases have a clear main direction. But then, if these are generic
> bindings, do we want to rely on that?

Most of my questions are making sure I've got a full understanding. Even
if there are typical devices that have bidirectional interfaces I think
my decisions as a binding author would be the same. Given an arbitrary
graph of devices, I would still choose one place to describe a
connection. If it was a complex setup I may use a completely separate
node that collects all the element devices. On more straightforward
setups I would choose one of the devices as the 'master' and have it
describe the connections to other component devices.

> >> According to video-interfaces.txt, it is expected that endpoints contain
> >> phandles pointing to the remote endpoint on both sides. I'd like to
> >> leave this up to the more specialized bindings, but I can see that this
> >> makes enumerating the connections starting from each device tree node
> >> easier, for example at probe time.
> > 
> > This has come up in the past. That approach comes down to premature
> > optimization at the expense of making the data structure more prone to
> > inconsistency. I consider it to be a bad pattern.
> > 
> > Backlinks are good for things like dynamically linked lists that need to
> > be fast and the software fully manages the links. For a data 

Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-08 Thread Tomi Valkeinen
On 07/03/14 19:06, Grant Likely wrote:
> On Thu, 27 Feb 2014 10:36:36 +0200, Tomi Valkeinen  
> wrote:
>> On 26/02/14 16:48, Philipp Zabel wrote:
>>
 I would like the document to acknowledge the difference from the
 phandle+args pattern used elsewhere and a description of when it would
 be appropriate to use this instead of a simpler binding.
>>>
>>> Alright. The main point of this binding is that the devices may have
>>> multiple distinct ports that each can be connected to other devices.
>>
>> The other main point with this binding is multiple endpoints per port.
>> So you can have, say, a display controller, with single port, which has
>> two endpoints going to two separate LCD panels.
>>
>> In physical level that would usually mean that the same pins from the
>> display controller are connected to two panels. Most likely this would
>> mean that only one panel can be used at a time, possibly with different
>> settings (say, 16 RGB pins for one panel, 24 RGB pins for the other).
> 
> What device is in control in that scenario?

The endpoints in a single port are exclusive, only one can be active at
a time. So the control for the active path would be no different than in
the single-panel case (for which people have different opinions).

 Tomi
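
The arrangement Tomi describes, one port whose pins can drive either of two panels, could be written roughly as follows (sketch, invented names):

```dts
display-controller {
	port {
		#address-cells = <1>;
		#size-cells = <0>;
		/* Two endpoints share the same physical pins; only one
		 * may be active at a time. */
		endpoint@0 {
			reg = <0>;
			remote-endpoint = <&panel16_in>;  /* 16-bit RGB panel */
		};
		endpoint@1 {
			reg = <1>;
			remote-endpoint = <&panel24_in>;  /* 24-bit RGB panel */
		};
	};
};
```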






Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-08 Thread Tomi Valkeinen
On 07/03/14 19:05, Grant Likely wrote:
> On Wed, 26 Feb 2014 15:48:49 +0100, Philipp Zabel  
> wrote:
>> Hi Grant,
>>
>> thank you for the comments.
> 
> Hi Philipp,
> 
> I've got lots of comments and questions below, but I must say thank you
> for doing this. It is a helpful description.

Thanks for the comments. I'll answer from my point of view, which may be
different than Philipp's.

> Bear with me, I'm going to comment on each point. I'm not criticizing,
> but I do want to understand the HW design. In particular I want to
> understand why the linkage is treated as bidirectional instead of one
> device being the master.
> 
> In this specific example, what would be managing the transfer? Is there
> an overarching driver that assembles the pieces, or would you expect
> individual drivers to find each other?

The direction of the dataflow depends on the case. For camera, the data
flows from external components towards the SoC. For display, vice versa.
I don't know much about camera, but for display there are bi-directional
buses, like MIPI DBI and MIPI DSI. However, in those cases, the main
flow direction is clear, towards the display.

Then again, these are meant to be generic graph bindings, and the
dataflow could be fully bi-directional.

As for the driver model (does it matter when we are talking about DT
bindings?), I think there are very different opinions. My thought for
display is that there is always an overarching driver, something that
manages the video pipeline as a whole.

However, each of the individual drivers in the video pipeline will
control its input ports. So, say, a panel driver gets a command from the
overarching control driver to enable the panel device. The panel driver
reacts by calling the encoder (i.e. the one that outputs pixels to the
panel), commanding it to enable the video stream.

I know some (many?) people think the other way around, that the encoder
commands the panel to enable itself. I don't think that's versatile
enough. As a theoretical, slightly exaggerated, example, we could have a
panel that wants the pixel clock to be enabled for a short while before
enabling the actual pixel stream. And maybe the panel driver needs to do
something before the pixel stream is enabled (say, send a command to the
panel).

The above is very easy to do if the panel driver is in control of the
received video stream. It's rather difficult to do if the encoder is in
control.

Also, I think a panel (or any encoder) can be compared to a display
controller: a display controller uses DMA to transfer pixel data from
the memory to the display controller. A panel or encoder uses the
incoming video bus to transfer pixel data from the previous display
component to the panel or encoder device. And I don't think anyone says
the DMA driver should be in control of the display controller.

But this is going quite a bit into the SW architecture. Does it matter
when we are talking about the representation of HW in the DT data?

> In all of the above examples I see a unidirectional data flow. There are
> producer ports and consumer ports. Is there any (reasonable) possibility
> of peer ports that are bidirectional?

Yes, as I mentioned above. However, I do believe that at the moment all
the cases have a clear main direction. But then, if these are generic
bindings, do we want to rely on that?

>> According to video-interfaces.txt, it is expected that endpoints contain
>> phandles pointing to the remote endpoint on both sides. I'd like to
>> leave this up to the more specialized bindings, but I can see that this
>> makes enumerating the connections starting from each device tree node
>> easier, for example at probe time.
> 
> This has come up in the past. That approach comes down to premature
> optimization at the expense of making the data structure more prone to
> inconsistency. I consider it to be a bad pattern.
> 
> Backlinks are good for things like dynamically linked lists that need to
> be fast and the software fully manages the links. For a data structure like
> the FDT it is better to have the data in one place, and one place only.
> Besides, computers are *good* at parsing data structures. :-)
> 
> I appreciate that existing drivers may be using the backlinks, but I
> strongly recommend not doing it for new users.

Ok. If we go for single directional link, the question is then: which
way? And is the direction different for display and camera, which are
kind of reflections of each other?

> Another thought. In terms of the pattern, I would add a recommendation
> that there should be a way to identify ports of a particular type. ie.
> If I were using the pattern to implement a patch bay of DSP filters,
> where each input and output, then each target node should have a unique
> identifier property analogous to "interrupt-controller" or
> "gpio-controller". In this fictitious example I would probably choose
> "audiostream-input-port" and "audiostream-output-port" as empty
> properties in the 

Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-08 Thread Tomi Valkeinen
On 07/03/14 19:05, Grant Likely wrote:
 On Wed, 26 Feb 2014 15:48:49 +0100, Philipp Zabel p.za...@pengutronix.de 
 wrote:
 Hi Grant,

 thank you for the comments.
 
 Hi Philipp,
 
 I've got lots of comments and quesitons below, but I must say thank you
 for doing this. It is a helpful description.

Thanks for the comments. I'll answer from my point of view, which may be
different than Philipp's.

 Bear with me, I'm going to comment on each point. I'm not criticizing,
 but I do want to understand the HW design. In particular I want to
 understand why the linkage is treated as bidirectional instead of one
 device being the master.
 
 In this specific example, what would be managing the transfer? Is there
 an overarching driver that assembles the pieces, or would you expect
 individual drivers to find each other?

The direction of the dataflow depends on the case. For camera, the data
flows from external components towards the SoC. For display, vice versa.
I don't know much about camera, but for display there are bi-directional
buses, like MIPI DBI and MIPI DSI. However, in those cases, the main
flow direction is clear, towards the display.

Then again, these are meant to be generic graph bindings, and the
dataflow could be fully bi-directional.

As for the driver model (does it matter when we are talking about DT
bindings?), I think there are very different opinions. My thought for
display is that there is always an overarching driver, something that
manages the video pipeline as a whole.

However, each of the individual drivers in the video pipeline will
control its input ports. So, say, a panel driver gets a command from the
overarching control driver to enable the panel device. The panel driver
reacts by calling the encoder (i.e. the one that outputs pixels to the
panel), commanding it to enable the video stream.

I know some (many?) people think the other way around, that the encoder
commands the panel to enable itself. I don't think that's versatile
enough. As a theoretical, slightly exaggerated, example, we could have a
panel that wants the pixel clock to be enabled for a short while before
enabling the actual pixel stream. And maybe the panel driver needs to do
something before the pixel stream is enabled (say, send a command to the
panel).

The above is very easy to do if the panel driver is in control of the
received video stream. It's rather difficult to do if the encoder is in
control.

Also, I think a panel (or any encoder) can be compared to a display
controller: a display controller uses DMA to transfer pixel data from
the memory to the display controller. A panel or encoder uses the
incoming video bus to transfer pixel data from the previous display
component to the panel or encoder device. And I don't think anyone says
the DMA driver should be in control of the display controller.

But this is going quite a bit into the SW architecture. Does it matter
when we are talking about the representation of HW in the DT data?

 In all of the above examples I see a unidirectional data flow. There are
 producer ports and consumer ports. Is there any (reasonable) possibility
 of peer ports that are bidirectional?

Yes, as I mentioned above. However, I do believe that at the moment all
the cases have a clear a main direction. But then, if these are generic
bindings, do we want to rely on that?

 According to video-interfaces.txt, it is expected that endpoints contain
 phandles pointing to the remote endpoint on both sides. I'd like to
 leave this up to the more specialized bindings, but I can see that this
 makes enumerating the connections starting from each device tree node
 easier, for example at probe time.
 
 This has come up in the past. That approach comes down to premature
 optimization at the expense of making the data structure more prone to
 inconsistency. I consider it to be a bad pattern.
 
 Backlinks are good for things like dynamically linked lists that need to
 be fast and the software fully manages the links. For a data structure like
 the FDT it is better to have the data in one place, and one place only.
 Besides, computers are *good* at parsing data structures. :-)
 
 I appreciate that existing drivers may be using the backlinks, but I
 strongly recommend not doing it for new users.

Ok. If we go for single directional link, the question is then: which
way? And is the direction different for display and camera, which are
kind of reflections of each other?

 Another thought. In terms of the pattern, I would add a recommendation
 that there should be a way to identify ports of a particular type. ie.
 If I were using the pattern to implement an patch bay of DSP filters,
 where each input and output, then each target node should have a unique
 identifier property analogous to interrupt-controller or
 gpio-controller. In this fictitious example I would probably choose
 audiostream-input-port and audiostream-output-port as empty
 properties in the port nodes. (I'm not suggesting a change to 

Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-08 Thread Tomi Valkeinen
On 07/03/14 19:06, Grant Likely wrote:
 On Thu, 27 Feb 2014 10:36:36 +0200, Tomi Valkeinen tomi.valkei...@ti.com 
 wrote:
 On 26/02/14 16:48, Philipp Zabel wrote:

 I would like the document to acknowledge the difference from the
 phandle+args pattern used elsewhere and a description of when it would
 be appropriate to use this instead of a simpler binding.

 Alright. The main point of this binding is that the devices may have
 multiple distinct ports that each can be connected to other devices.

 The other main point with this binding are multiple endpoints per port.
 So you can have, say, a display controller, with single port, which has
 two endpoints going to two separate LCD panels.

 In physical level that would usually mean that the same pins from the
 display controller are connected to two panels. Most likely this would
 mean that only one panel can be used at a time, possibly with different
 settings (say, 16 RGB pins for one panel, 24 RGB pins for the other).
 
 What device is in control in that scenario?

The endpoints in a single port are exclusive, only one can be active at
a time. So the control for the active path would be no different than in
single panel case (for which people have different opinions).

 Tomi




signature.asc
Description: OpenPGP digital signature


Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-08 Thread Grant Likely
On Sat, 8 Mar 2014 12:33:12 +0200, Tomi Valkeinen tomi.valkei...@ti.com wrote:
 On 07/03/14 19:05, Grant Likely wrote:
  On Wed, 26 Feb 2014 15:48:49 +0100, Philipp Zabel p.za...@pengutronix.de 
  wrote:
  Hi Grant,
 
  thank you for the comments.
  
  Hi Philipp,
  
  I've got lots of comments and quesitons below, but I must say thank you
  for doing this. It is a helpful description.
 
 Thanks for the comments. I'll answer from my point of view, which may be
 different than Philipp's.
 
  Bear with me, I'm going to comment on each point. I'm not criticizing,
  but I do want to understand the HW design. In particular I want to
  understand why the linkage is treated as bidirectional instead of one
  device being the master.
  
  In this specific example, what would be managing the transfer? Is there
  an overarching driver that assembles the pieces, or would you expect
  individual drivers to find each other?
 
 The direction of the dataflow depends on the case. For camera, the data
 flows from external components towards the SoC. For display, vice versa.
 I don't know much about camera, but for display there are bi-directional
 buses, like MIPI DBI and MIPI DSI. However, in those cases, the main
 flow direction is clear, towards the display.
 
 Then again, these are meant to be generic graph bindings, and the
 dataflow could be fully bi-directional.
 
 As for the driver model (does it matter when we are talking about DT
 bindings?), I think there are very different opinions. My thought for
 display is that there is always an overarching driver, something that
 manages the video pipeline as a whole.
 
 However, each of the individual drivers in the video pipeline will
 control its input ports. So, say, a panel driver gets a command from the
 overarching control driver to enable the panel device. The panel driver
 reacts by calling the encoder (i.e. the one that outputs pixels to the
 panel), commanding it to enable the video stream.
 
 I know some (many?) people think the other way around, that the encoder
 commands the panel to enable itself. I don't think that's versatile
 enough. As a theoretical, slightly exaggerated, example, we could have a
 panel that wants the pixel clock to be enabled for a short while before
 enabling the actual pixel stream. And maybe the panel driver needs to do
 something before the pixel stream is enabled (say, send a command to the
 panel).
 
 The above is very easy to do if the panel driver is in control of the
 received video stream. It's rather difficult to do if the encoder is in
 control.
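
 The sequencing described above can be sketched in plain C (a toy model,
 not a real kernel API; the ops structure and function names are made up
 for illustration):

 ```c
 #include <stdio.h>

 /* Hypothetical operations an encoder exposes to its consumer. */
 struct encoder_ops {
 	void (*enable_clock)(void);
 	void (*enable_stream)(void);
 };

 static void enc_enable_clock(void)
 {
 	puts("encoder: pixel clock on");
 }

 static void enc_enable_stream(void)
 {
 	puts("encoder: pixel stream on");
 }

 /* The panel driver controls its input port: clock first, panel
  * setup while only the clock runs, pixel stream last. */
 static void panel_enable(const struct encoder_ops *src)
 {
 	src->enable_clock();
 	puts("panel: send init command");
 	src->enable_stream();
 }

 int main(void)
 {
 	static const struct encoder_ops enc = {
 		enc_enable_clock,
 		enc_enable_stream,
 	};

 	panel_enable(&enc);
 	return 0;
 }
 ```

 With the encoder in control instead, the "send init command" step would
 have to be wedged into the encoder's own enable path somehow.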
 
 Also, I think a panel (or any encoder) can be compared to a display
 controller: a display controller uses DMA to transfer pixel data from
 the memory to the display controller. A panel or encoder uses the
 incoming video bus to transfer pixel data from the previous display
 component to the panel or encoder device. And I don't think anyone says
 the DMA driver should be in control of the display controller.
 
 But this is going quite a bit into the SW architecture. Does it matter
 when we are talking about the representation of HW in the DT data?

It matters to the extent that the binding author makes a decision about
how to represent the hardware. The driver architecture doesn't need to
match the binding layout, but it is nice when they line up.

  In all of the above examples I see a unidirectional data flow. There are
  producer ports and consumer ports. Is there any (reasonable) possibility
  of peer ports that are bidirectional?
 
 Yes, as I mentioned above. However, I do believe that at the moment all
 the cases have a clear main direction. But then, if these are generic
 bindings, do we want to rely on that?

Most of my questions are making sure I've got a full understanding. Even
if there are typical devices that have bidirectional interfaces I think
my decisions as a binding author would be the same. Given an arbitrary
graph of devices, I would still choose one place to describe a
connection. If it was a complex setup I may use a completely separate
node that collects all the element devices. On more straightforward
setups I would choose one of the devices as the 'master' and have it
describe the connections to other component devices.
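
As a sketch of that single-linked option (the node names and labels here
are hypothetical, not taken from any existing binding):

```dts
/* Single-linked layout: only the 'master' device describes the
 * connection; the panel carries no back-reference. */
display-controller {
	port {
		endpoint {
			remote-endpoint = <&panel_in>;
		};
	};
};

panel {
	port {
		panel_in: endpoint {
			/* found by walking the graph from the master */
		};
	};
};
```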

  According to video-interfaces.txt, it is expected that endpoints contain
  phandles pointing to the remote endpoint on both sides. I'd like to
  leave this up to the more specialized bindings, but I can see that this
  makes enumerating the connections starting from each device tree node
  easier, for example at probe time.
  
  This has come up in the past. That approach comes down to premature
  optimization at the expense of making the data structure more prone to
  inconsistency. I consider it to be a bad pattern.
  
  Backlinks are good for things like dynamically linked lists that need to
  be fast, where the software fully manages the links. For a data structure
  like the FDT it is better to have the data in one place, and one place
  only.
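
For comparison, the double-linked layout from video-interfaces.txt looks
roughly like this (labels illustrative):

```dts
/* Double-linked: both endpoints carry a remote-endpoint phandle, so
 * the same connection is described twice and can fall out of sync. */
&display_controller {
	port {
		dc_out: endpoint {
			remote-endpoint = <&panel_in>;
		};
	};
};

&panel {
	port {
		panel_in: endpoint {
			remote-endpoint = <&dc_out>;
		};
	};
};
```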
