Re: [PATCH/RFC v3 00/19] Common Display Framework

2013-10-18 Thread Andrzej Hajda
On 10/17/2013 02:55 PM, Tomi Valkeinen wrote:
 On 17/10/13 15:26, Andrzej Hajda wrote:

 I am not sure what exactly the encoder performs; if this is only image
 transport from dispc to panel, the CDF pipeline in both cases should
 look like:
 dispc ----> panel
 The only difference is that panels will be connected via different Linux
 bus adapters, but it will be irrelevant to CDF itself. In this case I
 would say this is a DSI-master rather than an encoder, or at least that
 the only function of the encoder is DSI.
 Yes, as I said, it's up to the driver writer how he wants to use CDF. If
 he doesn't see the point of representing the SoC's DSI encoder as a
 separate CDF entity, nobody forces him to do that.
Having it as an entity would cause the 'problem' of two APIs as you
described below :)
One API via control bus, another one via CDF.

 On OMAP, we have single DISPC with multiple parallel outputs, and a
 bunch of encoder IPs (MIPI DPI, DSI, DBI, etc). Each encoder IP can be
 connected to some of the DISPC's output. In this case, even if the DSI
 encoder does nothing special, I see it much better to represent the DSI
 encoder as a CDF entity so that the links between DISPC, DSI, and the
 DSI peripherals are all there.

 If display_timings on input and output differ, I suppose it should be
 modeled as a display_entity, as this is additional functionality (not
 covered by the DSI standard AFAIK).
 Well, DSI standard is about the DSI output. Not about the encoder's
 input, or the internal operation of the encoder.

 Of course there are some settings which are not panel dependent and those
 should reside in DSI node.
 Exactly. And when the two panels require different non-panel-dependent
 settings, how do you represent them in the DT data?
 non-panel-dependent setting cannot depend on panel, by definition :)
 With non-panel-dependent setting I meant something that is a property
 of the DSI master device, but still needs to be configured differently
 for each panel.

 Say, pin configuration. When using panel A, the first pin of the DSI
 block could be clock+. With panel B, the first pin could be clock-. This
 configuration is about DSI master, but it is different for each panel.

 If we have separate endpoint in the DSI master for each panel, this data
 can be there. If we don't have the endpoint, as is the case with
 separate control bus, where is that data?
I am open to propositions. To me it seems somewhat similar to clock
mapping in DT (clock-names are mapped to provider clocks), so I think it
could be put in the panel node and parsed by the DSI-master.

 Could you describe such scenario?
 If we have two independent APIs, ctrl and video, that affect the same
 underlying hardware, the DSI bus, we could have a scenario like this:

 thread 1:

 ctrl->op_foo();
 ctrl->op_bar();

 thread 2:

 video->op_baz();

 Even if all those ops do locking properly internally, the fact that
 op_baz() can be called in between op_foo() and op_bar() may cause problems.

 To avoid that issue with two APIs we'd need something like:

 thread 1:

 ctrl->lock();
 ctrl->op_foo();
 ctrl->op_bar();
 ctrl->unlock();

 thread 2:

 video->lock();
 video->op_baz();
 video->unlock();
 I should mention I was asking about a real hw/drivers configuration.
 I do not know what you mean by video->op_baz(). The DSI-master is not
 modeled in CDF, and only CDF provides video operations.
 It was just an example of the additional locking complexity when
 using two APIs.

 The point is that if the panel driver has two pointers (i.e. API), one
 for the control bus, one for the video bus, and ops on both buses affect
 the same hardware, the locking is not easy.

 If, on the other hand, the panel driver only has one API to use, it's
 simple to require the caller to handle any locking.
I guess you are describing a scenario with the DSI-master having its own
entity. In such a case its video ops are accessible at least to all
pipeline neighbours and to the pipeline controller, so I do not see how
client-side locking would work anyway.
Additionally, multiple panels connected to one DSI also make it harder.
Thus I do not see that the 'client lock' approach would work, even
using the video-source approach.

Andrzej




Re: [PATCH/RFC v3 00/19] Common Display Framework

2013-10-17 Thread Andrzej Hajda
Hi Tomi,

Sorry for delayed response.


On 10/11/2013 04:45 PM, Tomi Valkeinen wrote:
 On 11/10/13 17:16, Andrzej Hajda wrote:

 Picture size, content and format are the same on the input and the
 output of DSI. The same bits which enter DSI appear on the output.
 Internally the bit order can be different, but practically you are
 configuring the DSI master and slave with the same format.

 If you create a DSI entity you will have to always set the same format
 and size on the DSI input, DSI output and encoder input.
 If you skip creating a DSI entity you lose nothing, and you do not need
 to take care of it.
 Well, this is really a different question from the bus problem. But
 nothing says the DSI master cannot change the format or even size. For
 sure it can change the video timings. The DSI master could even take two
 parallel inputs, and combine them into one DSI output. You just can't
 know what all the possible pieces of hardware do =)
 If you have a bigger IP block that internally contains the DISPC and the
 DSI, then, yes, you can combine them into one display entity. I don't
 think that's correct, though. And if the DISPC and DSI are independent
 blocks, then especially I think there must be an entity for the DSI
 block, which will enable the powers, clocks, etc, when needed.
The main function of DSI is to transport pixels from one IP to another IP,
and this function IMO should not be modeled by a display entity.
Power, clocks, etc. will be handled via the control bus according to
panel demands.
If the 'DSI chip' has additional functions for video processing, they can
be modeled by a CDF entity if it makes sense.
 Well, one point of the endpoints is also to allow switching of video
 devices.

 For example, I could have a board with a SoC's DSI output, connected to
 two DSI panels. There would be some kind of mux between, so that I can
 select which of the panels is actually connected to the SoC.

 Here the first panel could use 2 datalanes, the second one 4. Thus, the
 DSI master would have two endpoints, the other one using 2 and the other
 4 datalanes.

 If we decide that kind of support is not needed, well, is there even
 need for the V4L2 endpoints in the DT data at all?
 Hmm, both panels connected to one endpoint of dispc?
 The problem I see is which driver should handle panel switching,
 but this is a question about hardware design as well. If this is
 realized by dispc, I have already described the solution. If it is
 realized by another device, I do not see a problem with creating a
 corresponding CDF entity, or maybe it can be handled by the Pipeline
 Controller???
 Well the switching could be automatic, when the panel power is enabled,
 the DSI mux is switched for that panel. It's not relevant.

 We still have two different endpoint configurations for the same
 DSI-master port. If that configuration is in the DSI-master's port node,
 not inside an endpoint data, then that can't be supported.
I am not sure if I understand it correctly. But it seems quite simple:
when the panel starts/resumes it requests the DSI (via the control bus)
to fulfill its configuration settings.
Of course there are some settings which are not panel dependent and those
should reside in DSI node.
 I agree that having DSI/DBI control and video separated would be
 elegant. But I'd like to hear what is the technical benefit of that? At
 least to me it's clearly more complex to separate them than to keep them
 together (to the extent that I don't yet see how it is even possible),
 so there must be a good reason for the separation. I don't understand
 that reason. What is it?
 Roughly speaking, it is a question of where the more convenient place
 to put the bunch of ops is; technically both solutions can be
 implemented somehow.
 Well, it's also about dividing a single physical bus into two separate
 interfaces to it. It sounds to me that it would be much more complex
 with locking. With a single API, we can just say the caller handles
 locking. With two separate interfaces, there must be locking at the
 lower level.
 We say then: callee handles locking :)
 Sure, but my point was that the caller handling the locking is much
 simpler than the callee handling locking. And the latter causes
 atomicity issues, as the other API could be invoked in between two calls
 for the first API.

 
Could you describe such scenario?
 But note that I'm not saying we should not implement bus model just
 because it's more complex. We should go for bus model if it's better. I
 just want to bring up these complexities, which I feel are quite more
 difficult than with the simpler model.

 Pros of mipi bus:
 - no fake entity in CDF, with fake ops; I have to use similar entities
 in MIPI-CSI camera pipelines and it complicates life without any
 benefit (at least from the user side),
 You mean the DSI-master? I don't see how it's fake, it's a video
 processing unit that has to be configured. Even if we forget the control
 side, and just think about plain video stream with DSI video mode,
 there are things to

Re: [PATCH/RFC v3 00/19] Common Display Framework

2013-10-17 Thread Tomi Valkeinen
On 17/10/13 10:48, Andrzej Hajda wrote:

 The main function of DSI is to transport pixels from one IP to another IP
 and this function IMO should not be modeled by display entity.
 Power, clocks, etc will be performed via control bus according to
 panel demands.
 If 'DSI chip' has additional functions for video processing they can
 be modeled by CDF entity if it makes sense.

Now I don't follow. What do you mean with display entity and with CDF
entity? Are they the same?

Let me try to clarify my point:

On OMAP SoC we have a DSI encoder, which takes input from the display
controller in parallel RGB format, and outputs DSI.

Then there are external encoders that take MIPI DPI as input, and output
DSI.

The only difference with the above two components is that the first one
is embedded into the SoC. I see no reason to represent them in different
ways (i.e. as you suggested, not representing the SoC's DSI at all).

Also, if you use DSI burst mode, you will have to have different video
timings in the DSI encoder's input and output. And depending on the
buffering of the DSI encoder, you could have different timings in any case.

Furthermore, both components could have extra processing. I know the
external encoders sometimes do have features like scaling.

 We still have two different endpoint configurations for the same
 DSI-master port. If that configuration is in the DSI-master's port node,
 not inside an endpoint data, then that can't be supported.
 I am not sure if I understand it correctly. But it seems quite simple:
 when panel starts/resumes it request DSI (via control bus) to fulfill
 its configuration settings.
 Of course there are some settings which are not panel dependent and those
 should reside in DSI node.

Exactly. And when the two panels require different non-panel-dependent
settings, how do you represent them in the DT data?

 We say then: callee handles locking :)
 Sure, but my point was that the caller handling the locking is much
 simpler than the callee handling locking. And the latter causes
 atomicity issues, as the other API could be invoked in between two calls
 for the first API.

 
 Could you describe such scenario?

If we have two independent APIs, ctrl and video, that affect the same
underlying hardware, the DSI bus, we could have a scenario like this:

thread 1:

ctrl->op_foo();
ctrl->op_bar();

thread 2:

video->op_baz();

Even if all those ops do locking properly internally, the fact that
op_baz() can be called in between op_foo() and op_bar() may cause problems.

To avoid that issue with two APIs we'd need something like:

thread 1:

ctrl->lock();
ctrl->op_foo();
ctrl->op_bar();
ctrl->unlock();

thread 2:

video->lock();
video->op_baz();
video->unlock();
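To make the hazard concrete, here is a minimal C sketch (all names here are hypothetical, not from any proposed CDF API): both interfaces share one mutex inside the DSI master, and the exported lock()/unlock() ops are exactly what lets a caller keep the op_foo()/op_bar() pair atomic against op_baz():

```c
#include <assert.h>
#include <pthread.h>

/* Illustrative sketch only: dsi_master, ctrl_lock, video_op_baz
 * and friends are made-up names, not a proposed CDF API. */
struct dsi_master {
	pthread_mutex_t lock;	/* guards the shared DSI bus */
	int bus_state;		/* stand-in for real hardware state */
};

static struct dsi_master dsi = { PTHREAD_MUTEX_INITIALIZER, 0 };

/* Explicit lock ops that a two-API design would have to export so
 * a caller can make a multi-op ctrl sequence atomic. */
static void ctrl_lock(void)   { pthread_mutex_lock(&dsi.lock); }
static void ctrl_unlock(void) { pthread_mutex_unlock(&dsi.lock); }

/* Two ctrl ops that only make sense as an uninterrupted pair. */
static void ctrl_op_foo(void) { dsi.bus_state = 1; }
static void ctrl_op_bar(void) { assert(dsi.bus_state == 1); }

/* A video op on the other API: internally locked, but still free to
 * run between op_foo() and op_bar() unless the caller holds the lock. */
static void video_op_baz(void)
{
	pthread_mutex_lock(&dsi.lock);
	dsi.bus_state = 2;	/* would corrupt a foo/bar sequence */
	pthread_mutex_unlock(&dsi.lock);
}
```

With the explicit lock held across the ctrl sequence, video_op_baz() blocks until ctrl_unlock(); without it, nothing stops the interleaving.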

 Platform devices
 
 Platform devices are devices that typically appear as autonomous
 entities in the system. This includes legacy port-based devices and
 host bridges to peripheral buses, and most controllers integrated
 into system-on-chip platforms.  What they usually have in common
 is direct addressing from a CPU bus.  Rarely, a platform_device will
 be connected through a segment of some other kind of bus; but its
 registers will still be directly addressable.
 Yep, typically and rarely =). I agree, it's not clear. I think there
 are things with DBI/DSI that clearly point to a platform device, but
 also the other way.
 Just to be sure, we are talking here about DSI-slaves, i.e. for example
 about panels, where direct access from the CPU bus is usually not
 possible.

Yes. My point is that with DBI/DSI there's not much bus there (if a
normal bus would be PCI/USB/i2c etc), it's just a point-to-point link
without probing or a clearly specified setup sequence.

If DSI/DBI was used only for control, a linux bus would probably make
sense. But DSI/DBI is mainly a video transport channel, with the
control-part being secondary.

And when considering that the video and control data are sent over the
same channel (i.e. there's no separate, independent ctrl channel), and
the strict timing restrictions with video, my gut feeling is just that
all the extra complexity brought with separating the control to a
separate bus is not worth it.

 Tomi






Re: [PATCH/RFC v3 00/19] Common Display Framework

2013-10-17 Thread Andrzej Hajda
On 10/17/2013 10:18 AM, Tomi Valkeinen wrote:
 On 17/10/13 10:48, Andrzej Hajda wrote:

 The main function of DSI is to transport pixels from one IP to another IP
 and this function IMO should not be modeled by display entity.
 Power, clocks, etc will be performed via control bus according to
 panel demands.
 If 'DSI chip' has additional functions for video processing they can
 be modeled by CDF entity if it makes sense.
 Now I don't follow. What do you mean with display entity and with CDF
 entity? Are they the same?
Yes, they are the same, sorry for confusion.

 Let me try to clarify my point:

 On OMAP SoC we have a DSI encoder, which takes input from the display
 controller in parallel RGB format, and outputs DSI.

 Then there are external encoders that take MIPI DPI as input, and output
 DSI.

 The only difference with the above two components is that the first one
 is embedded into the SoC. I see no reason to represent them in different
 ways (i.e. as you suggested, not representing the SoC's DSI at all).

 Also, if you use DSI burst mode, you will have to have different video
 timings in the DSI encoder's input and output. And depending on the
 buffering of the DSI encoder, you could have different timings in any case.
I am not sure what exactly the encoder performs; if this is only image
transport from dispc to panel, the CDF pipeline in both cases should
look like:
dispc ----> panel
The only difference is that panels will be connected via different Linux
bus adapters, but it will be irrelevant to CDF itself. In this case I
would say this is a DSI-master rather than an encoder, or at least that
the only function of the encoder is DSI.

If display_timings on input and output differ, I suppose it should be
modeled as a display_entity, as this is additional functionality (not
covered by the DSI standard AFAIK).
CDF in such case:
dispc --- encoder --- panel
In this case I would call it an encoder with a DSI master.


 Furthermore, both components could have extra processing. I know the
 external encoders sometimes do have features like scaling.
The same as above, ISP with embedded DSI.

 We still have two different endpoint configurations for the same
 DSI-master port. If that configuration is in the DSI-master's port node,
 not inside an endpoint data, then that can't be supported.
 I am not sure if I understand it correctly. But it seems quite simple:
 when panel starts/resumes it request DSI (via control bus) to fulfill
 its configuration settings.
 Of course there are some settings which are not panel dependent and those
 should reside in DSI node.
 Exactly. And when the two panels require different non-panel-dependent
 settings, how do you represent them in the DT data?

non-panel-dependent setting cannot depend on panel, by definition :)

 We say then: callee handles locking :)
 Sure, but my point was that the caller handling the locking is much
 simpler than the callee handling locking. And the latter causes
 atomicity issues, as the other API could be invoked in between two calls
 for the first API.

 
 Could you describe such scenario?
 If we have two independent APIs, ctrl and video, that affect the same
 underlying hardware, the DSI bus, we could have a scenario like this:

 thread 1:

 ctrl->op_foo();
 ctrl->op_bar();

 thread 2:

 video->op_baz();

 Even if all those ops do locking properly internally, the fact that
 op_baz() can be called in between op_foo() and op_bar() may cause problems.

 To avoid that issue with two APIs we'd need something like:

 thread 1:

 ctrl->lock();
 ctrl->op_foo();
 ctrl->op_bar();
 ctrl->unlock();

 thread 2:

 video->lock();
 video->op_baz();
 video->unlock();
I should mention I was asking about a real hw/drivers configuration.
I do not know what you mean by video->op_baz(). The DSI-master is not
modeled in CDF, and only CDF provides video operations.

I can guess one scenario, where two panels are connected to a single
DSI-master. In such a case both can call DSI ops, but I do not know how
you want to prevent that in your CDF-T implementation.


 Platform devices
 
 Platform devices are devices that typically appear as autonomous
 entities in the system. This includes legacy port-based devices and
 host bridges to peripheral buses, and most controllers integrated
 into system-on-chip platforms.  What they usually have in common
 is direct addressing from a CPU bus.  Rarely, a platform_device will
 be connected through a segment of some other kind of bus; but its
 registers will still be directly addressable.
 Yep, typically and rarely =). I agree, it's not clear. I think there
 are things with DBI/DSI that clearly point to a platform device, but
 also the other way.
 Just to be sure, we are talking here about DSI-slaves, ie. for example
 about panels,
 where direct accessing from CPU bus usually is not possible.
 Yes. My point is that with DBI/DSI there's not much bus there (if a
 normal bus would be PCI/USB/i2c etc), it's just a point to point link
 without probing 

Re: [PATCH/RFC v3 00/19] Common Display Framework

2013-10-17 Thread Tomi Valkeinen
On 17/10/13 15:26, Andrzej Hajda wrote:

 I am not sure what exactly the encoder performs, if this is only image
 transport from dispc to panel CDF pipeline in both cases should look like:
 dispc ----> panel
 The only difference is that panels will be connected via different Linux bus
 adapters, but it will be irrelevant to CDF itself. In this case I would say
 this is DSI-master rather than encoder, or at least that the only
 function of the
 encoder is DSI.

Yes, as I said, it's up to the driver writer how he wants to use CDF. If
he doesn't see the point of representing the SoC's DSI encoder as a
separate CDF entity, nobody forces him to do that.

On OMAP, we have single DISPC with multiple parallel outputs, and a
bunch of encoder IPs (MIPI DPI, DSI, DBI, etc). Each encoder IP can be
connected to some of the DISPC's output. In this case, even if the DSI
encoder does nothing special, I see it much better to represent the DSI
encoder as a CDF entity so that the links between DISPC, DSI, and the
DSI peripherals are all there.

 If display_timings on input and output differs, I suppose it should be
 modeled
 as display_entity, as this is an additional functionality(not covered by
 DSI standard AFAIK).

Well, DSI standard is about the DSI output. Not about the encoder's
input, or the internal operation of the encoder.

 Of course there are some settings which are not panel dependent and those
 should reside in DSI node.
 Exactly. And when the two panels require different non-panel-dependent
 settings, how do you represent them in the DT data?
 
 non-panel-dependent setting cannot depend on panel, by definition :)

With non-panel-dependent setting I meant something that is a property
of the DSI master device, but still needs to be configured differently
for each panel.

Say, pin configuration. When using panel A, the first pin of the DSI
block could be clock+. With panel B, the first pin could be clock-. This
configuration is about DSI master, but it is different for each panel.

If we have separate endpoint in the DSI master for each panel, this data
can be there. If we don't have the endpoint, as is the case with
separate control bus, where is that data?
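As a sketch of what such per-endpoint data could carry (a hypothetical structure, not a proposed binding or API), the DSI master could keep one configuration record per endpoint and apply it when the corresponding panel is enabled:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical per-endpoint configuration of the DSI master,
 * holding master-side settings that still differ per panel. */
struct dsi_endpoint_cfg {
	int num_lanes;
	bool clk_pin_inverted;	/* clock+ vs clock- on the first pin */
};

/* Panel A: 2 lanes, first pin is clock+.  Panel B: 4 lanes, clock-. */
static const struct dsi_endpoint_cfg panel_a_ep = { 2, false };
static const struct dsi_endpoint_cfg panel_b_ep = { 4, true };

static struct dsi_endpoint_cfg active;

/* Applied by the DSI master when the given panel is enabled. */
static void dsi_apply_endpoint_cfg(const struct dsi_endpoint_cfg *ep)
{
	active = *ep;
}
```

The open question in the thread is only where this record lives: in an endpoint node of the DSI master, or (as suggested above for the control-bus model) in the panel node, parsed by the DSI master.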

 Could you describe such scenario?
 If we have two independent APIs, ctrl and video, that affect the same
 underlying hardware, the DSI bus, we could have a scenario like this:

 thread 1:

 ctrl->op_foo();
 ctrl->op_bar();

 thread 2:

 video->op_baz();

 Even if all those ops do locking properly internally, the fact that
 op_baz() can be called in between op_foo() and op_bar() may cause problems.

 To avoid that issue with two APIs we'd need something like:

 thread 1:

 ctrl->lock();
 ctrl->op_foo();
 ctrl->op_bar();
 ctrl->unlock();

 thread 2:

 video->lock();
 video->op_baz();
 video->unlock();
 I should mention I was asking for real hw/drivers configuration.
 I do not know what you mean by video->op_baz().
 DSI-master is not modeled in CDF, and only CDF provides video
 operations.

It was just an example of the additional locking complexity when using
two APIs.

The point is that if the panel driver has two pointers (i.e. API), one
for the control bus, one for the video bus, and ops on both buses affect
the same hardware, the locking is not easy.

If, on the other hand, the panel driver only has one API to use, it's
simple to require the caller to handle any locking.
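A minimal sketch of the single-API alternative (illustrative names only, not a proposed CDF interface): control and video ops sit behind one pointer, so the DSI master can simply document that callers serialize all access themselves:

```c
#include <assert.h>

/* Hypothetical combined interface: one vtable for both the ctrl
 * and the video side of the DSI bus.  Since the panel driver
 * reaches the hardware only through this one pointer, the master
 * can require the caller to serialize all calls. */
struct dsi_bus_ops {
	void (*op_foo)(void);	/* ctrl-type op */
	void (*op_bar)(void);	/* ctrl-type op */
	void (*op_baz)(void);	/* video-type op */
};

/* Record the call order to show all ops flow through one interface. */
static int call_log[8];
static int n_calls;

static void foo(void) { call_log[n_calls++] = 1; }
static void bar(void) { call_log[n_calls++] = 2; }
static void baz(void) { call_log[n_calls++] = 3; }

static const struct dsi_bus_ops dsi_ops = { foo, bar, baz };
```

Because every op goes through the same vtable, a single lock held by the caller covers both the ctrl and the video side, with no cross-API atomicity gap to close.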

 I guess one scenario, when two panels are connected to single DSI-master.
 In such case both can call DSI ops, but I do not know how do you want to
 prevent it in case of your CDF-T implementation.

No, that was not the case I was describing. This was about a single panel.

If we have two independent APIs, we need to define how locking is
managed for those APIs. Even if in practice both APIs are used by the
same driver, and the driver can manage the locking, that's not really a
valid requirement. It'd be almost the same as requiring that gpio API
cannot be called at the same time as i2c API.

 Tomi






Re: [PATCH/RFC v3 00/19] Common Display Framework

2013-10-11 Thread Tomi Valkeinen
On 09/10/13 17:08, Andrzej Hajda wrote:

 As I have adapted an existing internal driver for the MIPI-DSI bus, I
 did not take too much care about DT. You are right, 'bta-timeout' is a
 configuration parameter (however its minimal value is determined by a
 characteristic of the DSI-slave). On the other side, currently there is
 no good place for such configuration parameters AFAIK.

The minimum bta-timeout should be deducible from the DSI bus speed,
shouldn't it? Thus there's no need to define it anywhere.

 - enable_hs and enable_te, used to enable/disable HS mode and
 tearing-elimination
 
 It seems there should be a way to synchronize TE signal with panel,
 in case signal is provided only to dsi-master. Some callback I suppose?
 Or transfer synchronization should be done by dsi-master.

Hmm, can you explain a bit what you mean?

Do you mean that the panel driver should get a callback when DSI TE
trigger happens?

On OMAP, when using DSI TE trigger, the dsi-master does it all. So the
panel driver just calls update() on the dsi-master, and then the
dsi-master will wait for TE, and then start the transfer. There's also a
callback to the panel driver when the transfer has completed.
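The OMAP flow just described could be sketched like this (all of these names are made up for illustration; none come from the real omapdss code): the panel driver calls update() on the master, the master waits for TE, starts the transfer, and notifies the panel via a completion callback:

```c
#include <assert.h>

/* Hypothetical sketch of the described flow. */
typedef void (*update_done_cb)(void *data);

static int te_waited, transfer_started, done_called;

/* Stand-ins for the master's internal steps. */
static void dsi_wait_for_te(void)    { te_waited = 1; }
static void dsi_start_transfer(void) { transfer_started = 1; }

/* Panel driver entry point: the master handles TE internally,
 * pushes the frame, then notifies the panel via the callback. */
static void dsi_update(update_done_cb done, void *data)
{
	dsi_wait_for_te();	/* block until the TE trigger */
	dsi_start_transfer();	/* push the frame over DSI */
	done(data);		/* transfer-completed callback */
}

/* The panel driver's completion handler. */
static void panel_transfer_done(void *data)
{
	(void)data;
	done_called = 1;
}
```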

 - set_max_rx_packet_size, used to configure the max rx packet size.
 Similar callbacks should be added to mipi-dsi-bus ops as well, to
 make it complete/generic.

Do you mean the same calls should exist both in the mipi-dbi-bus ops and
on the video ops? If they are called with different values, which one
wins?

 http://article.gmane.org/gmane.comp.video.dri.devel/90651
 http://article.gmane.org/gmane.comp.video.dri.devel/91269
 http://article.gmane.org/gmane.comp.video.dri.devel/91272

 I still think that it's best to consider DSI and DBI as a video bus (not
 as a separate video bus and a control bus), and provide the packet
 transfer methods as part of the video ops.
 I have read all posts regarding this issue and currently I tend
 towards a solution where CDF is used to model only video streams,
 with the control bus implemented in a different framework.
 The only concern I have is whether we should use a Linux bus for that.

Ok. I have many other concerns, as I've expressed in the mails =). I
still don't see how it could work. So I'd very much like to see a more
detailed explanation of how the separate control & video bus approach would
deal with different scenarios.

Let's consider a DSI-to-HDMI encoder chip. Version A of the chip is
controlled via DSI, version B is controlled via i2c. As the output of
the chip goes to HDMI connector, the DSI bus speed needs to be set
according to the resolution of the HDMI monitor.

So, with version A, the encoder driver would have some kind of pointers
to ctrl_ops and video_ops (or, pointers to dsi_bus instance and
video_bus instance), right? The ctrl_ops would need to have ops like
set_bus_speed, enable_hs, etc, to configure the DSI bus.

When the encoder driver is started, it'd probably set some safe bus
speed, configure the encoder a bit, read the EDID, enable HS,
re-configure the bus speed to match the monitor's video mode, configure
the encoder, and at last enable the video stream.
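That startup sequence could be sketched as ordered calls through the hypothetical ctrl ops (every name below is illustrative, not an existing API):

```c
#include <assert.h>

/* Hypothetical ctrl ops of the DSI bus, as described above. */
struct dsi_ctrl_ops {
	void (*set_bus_speed)(unsigned long hz);
	void (*enable_hs)(int enable);
};

static unsigned long cur_speed;
static int hs_enabled;

static void set_bus_speed(unsigned long hz) { cur_speed = hz; }
static void enable_hs(int enable)           { hs_enabled = enable; }

static const struct dsi_ctrl_ops ctrl = { set_bus_speed, enable_hs };

/* Version A startup: safe speed first, EDID read (elided), then
 * reconfigure the bus for the monitor's mode and enable video. */
static void encoder_start(unsigned long mode_hz)
{
	ctrl.set_bus_speed(100000000UL);	/* safe initial speed */
	/* ... configure encoder, read EDID over the DSI ctrl path ... */
	ctrl.enable_hs(1);
	ctrl.set_bus_speed(mode_hz);		/* match the monitor's mode */
	/* ... enable the video stream via video ops ... */
}
```

For version B the same sequence would run over i2c, which is exactly where the question of duplicated ops on the video side arises.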

Version B would have i2c_client and video_ops. When the driver starts,
it'd  probably do the same things as above, except the control messages
would go through i2c. That means that setting the bus speed, enabling
HS, etc, would happen through video_ops, as the i2c side has no
knowledge of the DSI side, right? Would there be identical ops on both
DSI ctrl and video ops?

That sounds very bad. What am I missing here? How would it work?

And, if we want to separate the video and control, I see no reason to
explicitly require the video side to be present. I.e. we could as well
have a DSI peripheral that has only the control bus used. How would that
reflect to, say, the DT presentation? Say, if we have a version A of the
encoder, we could have DT data like this (just a rough example):

soc-dsi {
	encoder {
		input: endpoint {
			remote-endpoint = <&soc-dsi-ep>;
			/* configuration for the DSI lanes */
			dsi-lanes = <0 1 2 3 4 5>;
		};
	};
};

So the encoder would be places inside the SoC's DSI node, similar to how
an i2c device would be placed inside SoC's i2c node. DSI configuration
would be inside the video endpoint data.

Version B would be almost the same:

i2c0 {
	encoder {
		input: endpoint {
			remote-endpoint = <&soc-dsi-ep>;
			/* configuration for the DSI lanes */
			dsi-lanes = <0 1 2 3 4 5>;
		};
	};
};

Now, how would the video-bus-less device be defined? It'd be inside the
soc-dsi node, that's clear. Where would the DSI lane configuration be?
Not inside 'endpoint' node, as that's for video and wouldn't exist in
this case. Would we have the same lane configuration in two places, once
for video and once for control?

I agree that 

Re: [PATCH/RFC v3 00/19] Common Display Framework

2013-10-11 Thread Andrzej Hajda
On 10/11/2013 08:37 AM, Tomi Valkeinen wrote:
 On 09/10/13 17:08, Andrzej Hajda wrote:

 As I have adapted an existing internal driver for the MIPI-DSI bus, I
 did not take too much care about DT. You are right, 'bta-timeout' is a
 configuration parameter (however its minimal value is determined by a
 characteristic of the DSI-slave). On the other side, currently there is
 no good place for such configuration parameters AFAIK.
 The minimum bta-timeout should be deducible from the DSI bus speed,
 shouldn't it? Thus there's no need to define it anywhere.
Hmm, the specification says "This specified period shall be longer than
the maximum possible turnaround delay for the unit to which the
turnaround request was sent."

 - enable_hs and enable_te, used to enable/disable HS mode and
 tearing-elimination
 It seems there should be a way to synchronize TE signal with panel,
 in case signal is provided only to dsi-master. Some callback I suppose?
 Or transfer synchronization should be done by dsi-master.
 Hmm, can you explain a bit what you mean?

 Do you mean that the panel driver should get a callback when DSI TE
 trigger happens?

 On OMAP, when using DSI TE trigger, the dsi-master does it all. So the
 panel driver just calls update() on the dsi-master, and then the
 dsi-master will wait for TE, and then start the transfer. There's also a
 callback to the panel driver when the transfer has completed.
Yes, I thought about a callback, but the approach with the DSI-master
taking care of synchronization in fact fits exynos-dsi better, and I
suspect omap as well.

 - set_max_rx_packet_size, used to configure the max rx packet size.
 Similar callbacks should be added to mipi-dsi-bus ops as well, to
 make it complete/generic.
 Do you mean the same calls should exist both in the mipi-dbi-bus ops and
 on the video ops? If they are called with different values, which one
 wins?
No, I meant that if mipi-dbi-bus wants to be complete it should have
similar ops.
I did not think about a scenario with two overlapping APIs.

 http://article.gmane.org/gmane.comp.video.dri.devel/90651
 http://article.gmane.org/gmane.comp.video.dri.devel/91269
 http://article.gmane.org/gmane.comp.video.dri.devel/91272

 I still think that it's best to consider DSI and DBI as a video bus (not
 as a separate video bus and a control bus), and provide the packet
 transfer methods as part of the video ops.
 I have read all posts regarding this issue and currently I tend
 to solution where CDF is used to model only video streams,
 with control bus implemented in different framework.
 The only concerns I have if we should use Linux bus for that.
 Ok. I have many other concerns, as I've expressed in the mails =). I
 still don't see how it could work. So I'd very much like to see a more
 detailed explanation of how the separate control & video bus approach would
 deal with different scenarios.

 Let's consider a DSI-to-HDMI encoder chip. Version A of the chip is
 controlled via DSI, version B is controlled via i2c. As the output of
 the chip goes to HDMI connector, the DSI bus speed needs to be set
 according to the resolution of the HDMI monitor.

 So, with version A, the encoder driver would have some kind of pointers
 to ctrl_ops and video_ops (or, pointers to dsi_bus instance and
 video_bus instance), right? The ctrl_ops would need to have ops like
 set_bus_speed, enable_hs, etc, to configure the DSI bus.

 When the encoder driver is started, it'd probably set some safe bus
 speed, configure the encoder a bit, read the EDID, enable HS,
 re-configure the bus speed to match the monitor's video mode, configure
 the encoder, and at last enable the video stream.

 Version B would have i2c_client and video_ops. When the driver starts,
 it'd  probably do the same things as above, except the control messages
 would go through i2c. That means that setting the bus speed, enabling
 HS, etc, would happen through video_ops, as the i2c side has no
 knowledge of the DSI side, right? Would there be identical ops on both
 DSI ctrl and video ops?

 That sounds very bad. What am I missing here? How would it work?
If I understand correctly, you think about a CDF topology like below:

DispContr(SoC) --- DSI-master(SoC) --- encoder(DSI or I2C)

But I think with a mipi-dsi-bus the topology could look like:

DispContr(SoC) --- encoder(DSI or I2C)

DSI-master will not have its own entity; in the graph it could be
represented by the link (---), as it really does not process the video,
only transports it.

In the case of version A, I think everything is clear.
The case of version B does not seem so nice at first sight, but still
seems quite straightforward to me - a special phandle link in the
encoder's node pointing to the DSI-master; the driver will find the
device at runtime and use its ops as needed (additional ops/helpers
required). This is also the way to support devices which can be
controlled by DSI and I2C at the same time. Anyway, I suspect such a
scenario will be quite rare.


 And, if we want to separate the video and control, I see 

Re: [PATCH/RFC v3 00/19] Common Display Framework

2013-10-11 Thread Tomi Valkeinen
On 11/10/13 14:19, Andrzej Hajda wrote:
 On 10/11/2013 08:37 AM, Tomi Valkeinen wrote:

 The minimum bta-timeout should be deducible from the DSI bus speed,
 shouldn't it? Thus there's no need to define it anywhere.
 Hmm, the specification says "This specified period shall be longer than
 the maximum possible turnaround delay for the unit to which the
 turnaround request was sent."

Ah, you're right. We can't know how long the peripheral will take
responding. I was thinking of something that only depends on the
bus-speed and the timings for that.

 If I understand correctly, you think about a CDF topology like below:
 
 DispContr(SoC) --- DSI-master(SoC) --- encoder(DSI or I2C)
 
 But I think with a mipi-dsi-bus the topology could look like:
 
 DispContr(SoC) --- encoder(DSI or I2C)
 
 DSI-master will not have its own entity; in the graph it could be
 represented by the link (---), as it really does not process the video,
 only transports it.

At least in OMAP, the SoC's DSI-master receives parallel RGB data from
DISPC, and encodes it to DSI. Isn't that processing? It's basically a
DPI-to-DSI encoder. And it's not a simple pass-through, the DSI video
timings could be considerably different than the DPI timings.

 In the case of version A, I think everything is clear.
 The case of version B does not seem so nice at first sight, but still
 seems quite straightforward to me - a special phandle link in the
 encoder's node pointing to the DSI-master; the driver will find the
 device at runtime and use its ops as needed (additional ops/helpers
 required). This is also the way to support devices which can be
 controlled by DSI and I2C at the same time. Anyway, I suspect such a
 scenario will be quite rare.

Okay, so if I gather it right, you say there would be something like
'dsi_adapter' (like i2c_adapter), which represents the dsi-master. And a
driver could get a pointer to this, regardless of whether the Linux
device is a DSI device.

At least one issue with this approach is the endpoint problem (see below).

 And, if we want to separate the video and control, I see no reason to
 explicitly require the video side to be present. I.e. we could as well
 have a DSI peripheral that has only the control bus used. How would that
 reflect to, say, the DT presentation? Say, if we have a version A of the
 encoder, we could have DT data like this (just a rough example):

 soc-dsi {
  encoder {
  input: endpoint {
  remote-endpoint = <&soc-dsi-ep>;
 Here I would replace soc-dsi-ep by phandle to display controller/crtc/
 
  /* configuration for the DSI lanes */
  dsi-lanes = <0 1 2 3 4 5>;
 Wow, quite advanced DSI.

Wha? That just means there is one clock lane and two datalanes, nothing
more =). We can select the polarity of a lane, so we describe both the
positive and negative lines there. So it says clk- is connected to pin
0, clk+ connected to pin 1, etc.

  };
  };
 };

 So the encoder would be placed inside the SoC's DSI node, similar to how
 an i2c device would be placed inside SoC's i2c node. DSI configuration
 would be inside the video endpoint data.

 Version B would be almost the same:

 i2c0 {
  encoder {
  input: endpoint {
  remote-endpoint = <&soc-dsi-ep>;
 soc-dsi-ep = disp-ctrl-ep
  /* configuration for the DSI lanes */
  dsi-lanes = <0 1 2 3 4 5>;
  };
  };
 };

 Now, how would the video-bus-less device be defined?
 It'd be inside the
 soc-dsi node, that's clear. Where would the DSI lane configuration be?
 Not inside 'endpoint' node, as that's for video and wouldn't exist in
 this case. Would we have the same lane configuration in two places, once
 for video and once for control?
 I think it is a control setting, so it should be put outside the
 endpoint node. Probably it could be placed in the encoder node.

Well, one point of the endpoints is also to allow switching of video
devices.

For example, I could have a board with a SoC's DSI output, connected to
two DSI panels. There would be some kind of mux between, so that I can
select which of the panels is actually connected to the SoC.

Here the first panel could use 2 datalanes, the second one 4. Thus, the
DSI master would have two endpoints, one using 2 datalanes and the other
using 4.

If we decide that kind of support is not needed, well, is there even
need for the V4L2 endpoints in the DT data at all?

 I agree that having DSI/DBI control and video separated would be
 elegant. But I'd like to hear what is the technical benefit of that? At
 least to me it's clearly more complex to separate them than to keep them
 together (to the extent that I don't yet see how it is even possible),
 so there must be a good reason for the separation. I don't understand
 that reason. What is it?
 Roughly speaking, it is a question of where the more convenient place
 is to put a bunch of ops; technically both solutions can be somehow 

Re: [PATCH/RFC v3 00/19] Common Display Framework

2013-10-11 Thread Andrzej Hajda
On 10/11/2013 02:30 PM, Tomi Valkeinen wrote:
 On 11/10/13 14:19, Andrzej Hajda wrote:
 On 10/11/2013 08:37 AM, Tomi Valkeinen wrote:
 The minimum bta-timeout should be deducible from the DSI bus speed,
 shouldn't it? Thus there's no need to define it anywhere.
 Hmm, the specification says "This specified period shall be longer than
 the maximum possible turnaround delay for the unit to which the
 turnaround request was sent."
 Ah, you're right. We can't know how long the peripheral will take
 responding. I was thinking of something that only depends on the
 bus-speed and the timings for that.

 If I understand correctly, you think about a CDF topology like below:

 DispContr(SoC) --- DSI-master(SoC) --- encoder(DSI or I2C)

 But I think with a mipi-dsi-bus the topology could look like:

 DispContr(SoC) --- encoder(DSI or I2C)

 DSI-master will not have its own entity; in the graph it could be
 represented by the link (---), as it really does not process the video,
 only transports it.
 At least in OMAP, the SoC's DSI-master receives parallel RGB data from
 DISPC, and encodes it to DSI. Isn't that processing? It's basically a
 DPI-to-DSI encoder. And it's not a simple pass-through, the DSI video
 timings could be considerably different than the DPI timings.
Picture size, content and format are the same on the input and on the
output of DSI. The same bits which enter DSI appear on the output.
Internally the bit order can be different, but practically you are
configuring the DSI master and slave with the same format.

If you create a DSI entity you will always have to set the same format
and size on the DSI input, the DSI output and the encoder input. If you
skip creating a DSI entity you lose nothing, and you do not need to take
care of it.


 In the case of version A, I think everything is clear.
 The case of version B does not seem so nice at first sight, but still
 seems quite straightforward to me - a special phandle link in the
 encoder's node pointing to the DSI-master; the driver will find the
 device at runtime and use its ops as needed (additional ops/helpers
 required). This is also the way to support devices which can be
 controlled by DSI and I2C at the same time. Anyway, I suspect such a
 scenario will be quite rare.
 Okay, so if I gather it right, you say there would be something like
 'dsi_adapter' (like i2c_adapter), which represents the dsi-master. And a
 driver could get a pointer to this, regardless of whether the Linux
 device is a DSI device.

 At least one issue with this approach is the endpoint problem (see below).

 And, if we want to separate the video and control, I see no reason to
 explicitly require the video side to be present. I.e. we could as well
 have a DSI peripheral that has only the control bus used. How would that
 reflect to, say, the DT presentation? Say, if we have a version A of the
 encoder, we could have DT data like this (just a rough example):

 soc-dsi {
 encoder {
 input: endpoint {
 remote-endpoint = <&soc-dsi-ep>;
 Here I would replace soc-dsi-ep by phandle to display controller/crtc/

 /* configuration for the DSI lanes */
 dsi-lanes = <0 1 2 3 4 5>;
 Wow, quite advanced DSI.
 Wha? That just means there is one clock lane and two datalanes, nothing
 more =). We can select the polarity of a lane, so we describe both the
 positive and negative lines there. So it says clk- is connected to pin
 0, clk+ connected to pin 1, etc.
OK, in the V4L binding world it means a DSI with six lanes :)

 };
 };
 };

 So the encoder would be placed inside the SoC's DSI node, similar to how
 an i2c device would be placed inside SoC's i2c node. DSI configuration
 would be inside the video endpoint data.

 Version B would be almost the same:

 i2c0 {
 encoder {
 input: endpoint {
 remote-endpoint = <&soc-dsi-ep>;
 soc-dsi-ep = disp-ctrl-ep
 /* configuration for the DSI lanes */
 dsi-lanes = <0 1 2 3 4 5>;
 };
 };
 };

 Now, how would the video-bus-less device be defined?
 It'd be inside the
 soc-dsi node, that's clear. Where would the DSI lane configuration be?
 Not inside 'endpoint' node, as that's for video and wouldn't exist in
 this case. Would we have the same lane configuration in two places, once
 for video and once for control?
 I think it is a control setting, so it should be put outside the
 endpoint node. Probably it could be placed in the encoder node.
 Well, one point of the endpoints is also to allow switching of video
 devices.

 For example, I could have a board with a SoC's DSI output, connected to
 two DSI panels. There would be some kind of mux between, so that I can
 select which of the panels is actually connected to the SoC.

 Here the first panel could use 2 datalanes, the second one 4. Thus, the
 DSI master would have two endpoints, one using 2 datalanes and the other
 using 4.

 If we decide that kind of support is not needed, well, is there even
 need for the 

Re: [PATCH/RFC v3 00/19] Common Display Framework

2013-10-11 Thread Tomi Valkeinen
On 11/10/13 17:16, Andrzej Hajda wrote:

 Picture size, content and format are the same on the input and on the
 output of DSI. The same bits which enter DSI appear on the output.
 Internally the bit order can be different, but practically you are
 configuring the DSI master and slave with the same format.
 
 If you create a DSI entity you will always have to set the same format
 and size on the DSI input, the DSI output and the encoder input. If you
 skip creating a DSI entity you lose nothing, and you do not need to take
 care of it.

Well, this is really a different question from the bus problem. But
nothing says the DSI master cannot change the format or even the size.
For sure it can change the video timings. The DSI master could even take
two parallel inputs and combine them into one DSI output. You can't know
what all the possible pieces of hardware do =).

If you have a bigger IP block that internally contains the DISPC and the
DSI, then, yes, you can combine them into one display entity. I don't
think that's correct, though. And if the DISPC and DSI are independent
blocks, then especially I think there must be an entity for the DSI
block, which will enable the powers, clocks, etc, when needed.

 Well, one point of the endpoints is also to allow switching of video
 devices.

 For example, I could have a board with a SoC's DSI output, connected to
 two DSI panels. There would be some kind of mux between, so that I can
 select which of the panels is actually connected to the SoC.

 Here the first panel could use 2 datalanes, the second one 4. Thus, the
 DSI master would have two endpoints, the other one using 2 and the other
 4 datalanes.

 If we decide that kind of support is not needed, well, is there even
 need for the V4L2 endpoints in the DT data at all?
 Hmm, both panels connected to one endpoint of dispc?
 The problem I see is which driver should handle the panel switching,
 but this is a question about the hardware design as well. If this is
 realized by dispc, I have already described the solution. If this is
 realized by another device, I do not see a problem with creating a
 corresponding CDF entity; or maybe it can be handled by a Pipeline
 Controller?

Well the switching could be automatic, when the panel power is enabled,
the DSI mux is switched for that panel. It's not relevant.

We still have two different endpoint configurations for the same
DSI-master port. If that configuration is in the DSI-master's port node,
not inside an endpoint data, then that can't be supported.

 I agree that having DSI/DBI control and video separated would be
 elegant. But I'd like to hear what is the technical benefit of that? At
 least to me it's clearly more complex to separate them than to keep them
 together (to the extent that I don't yet see how it is even possible),
 so there must be a good reason for the separation. I don't understand
 that reason. What is it?
 Roughly speaking, it is a question of where the more convenient place
 is to put a bunch of ops; technically both solutions can be somehow
 implemented.
 Well, it's also about dividing a single physical bus into two separate
 interfaces to it. It sounds to me that it would be much more complex
 with locking. With a single API, we can just say the caller handles
 locking. With two separate interfaces, there must be locking at the
 lower level.
 We say then: callee handles locking :)

Sure, but my point was that the caller handling the locking is much
simpler than the callee handling locking. And the latter causes
atomicity issues, as the other API could be invoked in between two calls
for the first API.

But note that I'm not saying we should not implement bus model just
because it's more complex. We should go for bus model if it's better. I
just want to bring up these complexities, which I feel are quite more
difficult than with the simpler model.

 Pros of the mipi bus:
 - no fake entity in CDF with fake ops; I have to use similar entities
 in MIPI-CSI camera pipelines and it complicates life without any
 benefit (at least from the user's side),
 You mean the DSI-master? I don't see how it's fake, it's a video
 processing unit that has to be configured. Even if we forget the control
 side, and just think about a plain video stream with DSI video mode,
 there are things to configure with it.

 What kind of issues you have in the CSI side, then?
 Not real issues, just needless calls to configure the CSI entity pads,
 with the same format and picture sizes as in the camera.

Well, the output of a component A is surely the same as the input of
component B, if B receives the data from A. So that does sound useless.
I don't do that kind of calls in my model.

 - CDF models only video buses; the control bus is a domain of Linux buses,
 Yes, but in this case the buses are the same. It makes me a bit nervous
 to have two separate ways (video and control) to use the same bus, in a
 case like video where timing is critical.

 So yes, we can consider video and control buses as virtual buses, and
 the actual transport is the 

Re: [PATCH/RFC v3 00/19] Common Display Framework

2013-10-09 Thread Andrzej Hajda
On 10/02/2013 03:24 PM, Tomi Valkeinen wrote:
 Hi Andrzej,

 On 02/10/13 15:23, Andrzej Hajda wrote:

 Using Linux buses for DBI/DSI
 =============================

 I still don't see how it would work. I've covered this multiple times in
 previous posts so I'm not going into more details now.

 I implemented DSI (just command mode for now) as a video bus, but with
 a bunch of extra ops for sending the control messages.
 Could you post the list of ops you had to create?
 I'd rather not post the ops I have in my prototype, as it's still a
 total hack. However, they are very much based on the current OMAP DSS's
 ops, so I'll describe them below. I hope I find time to polish my CDF
 hacks more, so that I can publish them.

 I have posted some time ago my implementation of DSI bus:
 http://thread.gmane.org/gmane.linux.drivers.video-input-infrastructure/69358/focus=69362
 A note about the DT data on your series, as I've been struggling to
 figure out the DT data for OMAP: some of the DT properties look like
 configuration, not hardware description. For example,
 samsung,bta-timeout doesn't describe hardware.
As I have adapted an existing internal driver for the MIPI-DSI bus, I
did not take too much care over the DT. You are right, 'bta-timeout' is
a configuration parameter (however its minimal value is determined by a
characteristic of the DSI-slave). On the other hand, currently there is
no good place for such configuration parameters, AFAIK.
 I needed three quite generic ops to make it working:
 - set_power(on/off),
 - set_stream(on/off),
 - transfer(dsi_transaction_type, tx_buf, tx_len, rx_buf, rx_len)
 I have recently replaced set_power by PM_RUNTIME callbacks,
 but I had to add .initialize ops.
 We have a bit more on omap:

 http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/include/video/omapdss.h#n648

 Some of those should be removed and some should be omap DSI's internal
 matters, not part of the API. But it gives an idea of the ops we use.
 Shortly about the ops:

 - (dis)connect, which might be similar to your initialize. connect is
 meant to connect the pipeline, reserving the video ports used, etc.

 - enable/disable, enable the DSI bus. If the DSI peripheral requires a
 continuous DSI clock, it's also started at this point.

 - set_config configures the DSI bus (like, command/video mode, etc.).

 - configure_pins can be ignored, I think that function is not needed.

 - enable_hs and enable_te, used to enable/disable HS mode and
 tearing-elimination

It seems there should be a way to synchronize the TE signal with the
panel, in case the signal is provided only to the dsi-master. Some
callback, I suppose? Or the transfer synchronization should be done by
the dsi-master.

 - update, which does a single frame transfer

 - bus_lock/unlock can be ignored

 - enable_video_output starts the video stream, when using DSI video mode

 - the request_vc, set_vc_id, release_vc can be ignored

 - Bunch of transfer funcs. Perhaps a single func could be used, as you
 do. We have sync write funcs, which do a BTA at the end of the write and
 wait for reply, and nosync version, which just pushes the packet to the
 TX buffers.

 - bta_sync, which sends a BTA and waits for the peripheral to reply

 - set_max_rx_packet_size, used to configure the max rx packet size.
Similar callbacks should be added to mipi-dsi-bus ops as well, to
make it complete/generic.


 Regarding the discussion of how and where to implement the control bus,
 I have thought about different alternatives:
 1. Implement the DSI-master as a parent device which will create a
 DSI-slave platform device, in a similar way as for MFD devices (ssbi.c
 seems to me a good example).
 2. Create a universal mipi-display-bus which will cover DSI, DBI and
 possibly other buses - they have a few common things, for example
 MIPI-DCS commands.
 
 I am not really convinced by either solution; both have some advantages
 and disadvantages.
 I think a dedicated DSI bus and your alternatives all have the same
 issues with splitting the DSI control into two. I've shared some of my
 thoughts here:

 http://article.gmane.org/gmane.comp.video.dri.devel/90651
 http://article.gmane.org/gmane.comp.video.dri.devel/91269
 http://article.gmane.org/gmane.comp.video.dri.devel/91272

 I still think that it's best to consider DSI and DBI as a video bus (not
 as a separate video bus and a control bus), and provide the packet
 transfer methods as part of the video ops.
I have read all the posts regarding this issue, and currently I tend
towards a solution where CDF is used to model only the video streams,
with the control bus implemented in a different framework. The only
concern I have is whether we should use a Linux bus for that.

Andrzej

  Tomi





Re: [PATCH/RFC v3 00/19] Common Display Framework

2013-10-02 Thread Andrzej Hajda
Hi Tomi,

On 09/30/2013 03:48 PM, Tomi Valkeinen wrote:
 On 09/08/13 20:14, Laurent Pinchart wrote:
 Hi everybody,

 Here's the third RFC of the Common Display Framework.
 
 
 Hi,
 
 I've been trying to adapt the latest CDF RFC for OMAP. I'm trying to
 gather some notes here about what I've discovered or how I see things.
 Some of these I have mentioned earlier, but I'm trying to collect them
 here nevertheless.
 
 I do have my branch with working DPI panel, TFP410 encoder, DVI-connector and
 DSI command mode panel drivers, and modifications to make omapdss work with
 CDF.  However, it's such a big hack, that I'm not going to post it. I hope I
 will have time to work on it to get something publishable to have something
 more concrete to present. But for the time being I have to move to other tasks
 for a while, so I thought I'd better post some comments when I still remember
 something about this.
 
 Using Linux buses for DBI/DSI
 =============================
 
 I still don't see how it would work. I've covered this multiple times in
 previous posts so I'm not going into more details now.
 
 I implemented DSI (just command mode for now) as a video bus, but with a
 bunch of extra ops for sending the control messages.

Could you post the list of ops you had to create?

I have posted some time ago my implementation of DSI bus:
http://thread.gmane.org/gmane.linux.drivers.video-input-infrastructure/69358/focus=69362

I needed three quite generic ops to make it working:
- set_power(on/off),
- set_stream(on/off),
- transfer(dsi_transaction_type, tx_buf, tx_len, rx_buf, rx_len)
I have recently replaced set_power by PM_RUNTIME callbacks,
but I had to add .initialize ops.

Regarding the discussion of how and where to implement the control bus,
I have thought about different alternatives:
1. Implement the DSI-master as a parent device which will create a
DSI-slave platform device, in a similar way as for MFD devices (ssbi.c
seems to me a good example).
2. Create a universal mipi-display-bus which will cover DSI, DBI and
possibly other buses - they have a few common things, for example
MIPI-DCS commands.

I am not really convinced by either solution; both have some advantages
and disadvantages.


 
 Call model
 ==========
 
 It may be that I just don't get how things are supposed to work with the RFC's
 ops, but I couldn't figure out how to use it in practice. I tried it for a few
 days, but got nowhere, and I then went with the proven model that's used in
 omapdss, where display entities handle calling the ops of the upstream
 entities.
 
 That's not to say the RFC's model doesn't work. I just didn't figure it
 out. And I guess it was more difficult to understand how to use it as
 the controller stuff is not implemented yet.
 
 It would be good to have a bit more complex cases in the RFC, like
 changing and verifying videomodes, fetching them via EDID, etc.
 
 Multiple inputs/outputs
 =======================
 
 I think changing the model from single-input & single-output to
 multiple inputs and outputs increases the difficulty of the
 implementation considerably. That's not a complaint as such, just an
 observation. I do think multiple inputs & outputs is a good feature.
 Then again, all the use cases I have only have a single input/output,
 so I've been wondering if there's some middle road, in which we somehow
 allow multiple inputs & outputs, but only implement the support for a
 single input & output.
 
 I've cut the corners in my tests by just looking at a single enabled
 input or output from an entity, and ignoring the rest (which I don't
 have in my use cases).
 
 Internal connections
 ====================
 
 The model currently only represents connections between entities. With
 multiple inputs & outputs I think it's important to also maintain the
 connections inside the entity. Say, we have an entity with two inputs
 and two outputs. If one output is enabled, which one of the inputs
 needs to be enabled and configured also? The current model doesn't give
 any solution to that.
 
 I haven't implemented this, as in my use cases I have just single inputs and
 outputs, so I can follow the pipeline trivially.
 
 Central entity
 ==============
 
 If I understand the RFC correctly, there's a central entity that
 manages all other entities connected to it. This central entity would
 normally be the display controller. I don't like this model, as it
 makes it difficult or impossible to manage situations where an entity
 is connected to two display controllers (even if only one of the
 display controllers would be connected at a time). It also makes this
 one display entity fundamentally different from the others, which I
 don't like.
 
 I think all the display entities should be similar. They would all
 register themselves to the CDF framework, which in turn would be used
 by somebody. This somebody could be the display controller driver,
 which is more or less how I've implemented it.
 
 Media entity/pads
 =================
 
 Using 

Re: [PATCH/RFC v3 00/19] Common Display Framework

2013-10-02 Thread Tomi Valkeinen
Hi Andrzej,

On 02/10/13 15:23, Andrzej Hajda wrote:

 Using Linux buses for DBI/DSI
 =============================

 I still don't see how it would work. I've covered this multiple times in
 previous posts so I'm not going into more details now.

 I implemented DSI (just command mode for now) as a video bus, but with
 a bunch of extra ops for sending the control messages.
 
  Could you post the list of ops you had to create?

I'd rather not post the ops I have in my prototype, as it's still a
total hack. However, they are very much based on the current OMAP DSS's
ops, so I'll describe them below. I hope I find time to polish my CDF
hacks more, so that I can publish them.

 I have posted some time ago my implementation of DSI bus:
 http://thread.gmane.org/gmane.linux.drivers.video-input-infrastructure/69358/focus=69362

A note about the DT data on your series, as I've been struggling to
figure out the DT data for OMAP: some of the DT properties look like
configuration, not hardware description. For example,
samsung,bta-timeout doesn't describe hardware.

 I needed three quite generic ops to make it working:
 - set_power(on/off),
 - set_stream(on/off),
 - transfer(dsi_transaction_type, tx_buf, tx_len, rx_buf, rx_len)
 I have recently replaced set_power by PM_RUNTIME callbacks,
 but I had to add .initialize ops.

We have a bit more on omap:

http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/include/video/omapdss.h#n648

Some of those should be removed and some should be omap DSI's internal
matters, not part of the API. But it gives an idea of the ops we use.
Shortly about the ops:

- (dis)connect, which might be similar to your initialize. connect is
meant to connect the pipeline, reserving the video ports used, etc.

- enable/disable, enable the DSI bus. If the DSI peripheral requires a
continuous DSI clock, it's also started at this point.

- set_config configures the DSI bus (like, command/video mode, etc.).

- configure_pins can be ignored, I think that function is not needed.

- enable_hs and enable_te, used to enable/disable HS mode and
tearing-elimination

- update, which does a single frame transfer

- bus_lock/unlock can be ignored

- enable_video_output starts the video stream, when using DSI video mode

- the request_vc, set_vc_id, release_vc can be ignored

- Bunch of transfer funcs. Perhaps a single func could be used, as you
do. We have sync write funcs, which do a BTA at the end of the write and
wait for reply, and nosync version, which just pushes the packet to the
TX buffers.

- bta_sync, which sends a BTA and waits for the peripheral to reply

- set_max_rx_packet_size, used to configure the max rx packet size.

 Regarding the discussion of how and where to implement the control bus,
 I have thought about different alternatives:
 1. Implement the DSI-master as a parent device which will create a
 DSI-slave platform device, in a similar way as for MFD devices (ssbi.c
 seems to me a good example).
 2. Create a universal mipi-display-bus which will cover DSI, DBI and
 possibly other buses - they have a few common things, for example
 MIPI-DCS commands.
 
 I am not really convinced by either solution; both have some advantages
 and disadvantages.

I think a dedicated DSI bus and your alternatives all have the same
issues with splitting the DSI control into two. I've shared some of my
thoughts here:

http://article.gmane.org/gmane.comp.video.dri.devel/90651
http://article.gmane.org/gmane.comp.video.dri.devel/91269
http://article.gmane.org/gmane.comp.video.dri.devel/91272

I still think that it's best to consider DSI and DBI as a video bus (not
as a separate video bus and a control bus), and provide the packet
transfer methods as part of the video ops.

 Tomi






[PATCH/RFC v3 00/19] Common Display Framework

2013-08-09 Thread Laurent Pinchart
Hi everybody,

Here's the third RFC of the Common Display Framework. This is a resend; the
series I've sent earlier seems not to have made it to the vger mailing lists,
possibly due to a too long list of CCs (the other explanation being that CDF
has been delayed for so long that vger considers it as spam, which I really
hope isn't the case :-)). I've thus dropped the CCs, sorry about that.

I won't repeat all the background information from the versions one and two
here, you can read it at http://lwn.net/Articles/512363/ and
http://lwn.net/Articles/526965/.

This RFC isn't final. Given the high interest in CDF and the urgent tasks that
kept delaying the next version of the patch set, I've decided to release v3
before completing all parts of the implementation. Known missing items are

- documentation: kerneldoc and this cover letter should provide basic
  information, more extensive documentation will likely make it to v4.

- pipeline configuration and control: generic code to configure and control
  display pipelines (in a nutshell, translating high-level mode setting and
  DPMS calls to low-level entity operations) is missing. Video and stream
  control operations have been carried over from v2, but will need to be
  revised for v4.

- DSI support: I still have no DSI hardware I can easily test the code on.

Special thanks go to

- Renesas for inviting me to LinuxCon Japan 2013 where I had the opportunity
  to validate the CDF v3 concepts with Alexandre Courbot (NVidia) and Tomasz
  Figa (Samsung).

- Tomi Valkeinen (TI) for taking the time to deeply brainstorm v3 with me.

- Linaro for inviting me to Linaro Connect Europe 2013, the discussions we had
  there greatly helped moving CDF forward.

- And of course all the developers who showed interest in CDF and spent time
  sharing ideas, reviewing patches and testing code.

I have to confess I was a bit lost and discouraged after all the CDF-related
meetings during which we have discussed how to move from v2 to v3. With every
meeting I was hoping to run the implementation through use cases of various
interesting parties and narrow down the scope of the huge fuzzy beast that CDF
was. With every meeting the scope actually broadened, with no clear path in
sight anywhere.

Earlier this year I was about to drop one of the requirements on which I had
based CDF v2: sharing drivers between DRM/KMS and V4L2. With only two HDMI
transmitters as use cases for that feature (with only out-of-tree drivers so
far), I just thought the complexity involved wasn't worth it and that I should
implement CDF v3 as a DRM/KMS-only helper framework. However, a seemingly
unrelated discussion with Xilinx developers showed me that hybrid SoC-FPGA
platforms such as the Xilinx Zynq 7000 have a large library of IP cores that
can be used in camera capture pipelines and in display pipelines. The two use
cases suddenly became tens or even hundreds of use cases that I couldn't
ignore anymore.

CDF v3 is thus userspace API agnostic. It isn't tied to DRM/KMS or V4L2 and
can be used by any kernel subsystem, potentially including FBDEV (although I
won't personally write FBDEV support code, as I've already advocated for FBDEV
to be deprecated).

The code you are about to read is based on the concept of display entities
introduced in v2. Diagrams related to the explanations below are available at
http://ideasonboard.org/media/cdf/20130709-lce-cdf.pdf.


Display Entities


A display entity abstracts any hardware block that sources, processes or sinks
display-related video streams. It offers an abstract API, implemented by display
entity drivers, that is used by master drivers (such as the main display driver)
to query, configure and control display pipelines.

Display entities are connected to at least one video data bus, and optionally
to a control bus. The video data busses carry display-related video data out
of sources (such as a CRTC in a display controller) to sinks (such as a panel
or a monitor), optionally going through transmitters, encoders, decoders,
bridges or other similar devices. A CRTC or a panel will usually be connected
to a single data bus, while an encoder or a transmitter will be connected to
two data busses.

The simple linear display pipelines we find in most embedded platforms at the
moment are expected to grow more complex with time. CDF needs to accommodate
those needs from the start to be, if not future-proof, at least present-proof
by the time it gets merged into mainline. For this reason display
entities have data ports through which video streams flow in or out, with link
objects representing the connections between those ports. A typical entity in
a linear display pipeline will have one (for video source and video sink
entities such as CRTCs or panels) or two ports (for video processing entities
such as encoders), but more ports are allowed, and entities can be linked in
complex non-linear pipelines.

Readers might think that this model if