Re: [PATCH] video: treat signal like timeout as failure

2015-03-10 Thread Tomi Valkeinen
On 10/03/15 16:46, Russell King - ARM Linux wrote:

> In which case, let me propose that the exynos fbdev driver needs to be
> moved to drivers/staging, and stay there until this stuff gets fixed.
> drivers/staging is supposed to be for stuff which isn't up to the mark,
> and which is potentially unstable. And that's exactly what this driver
> is.

There is drivers/gpu/drm/exynos/ which is getting a lot of updates. So...

I'd propose removing the exynos fbdev driver if the exynos drm driver
offers the same functionality. I don't know if that's the case. Does the
drm driver support all the devices the fbdev supports?

Also, I'm not sure if and how we can remove drivers. If the exynos fbdev
driver is dropped, that could break boards that reference it in their
.dts files. And if the drm driver doesn't offer the exact same /dev/fbX
interface, it would break userspace.

So I don't know if that's possible. But that's what I'd like to do,
eventually, for all the fbdev drivers: implement a drm driver, remove
the fbdev one.

 Tomi

Re: [PATCH] video: treat signal like timeout as failure

2015-03-10 Thread Tomi Valkeinen
On 20/01/15 07:23, Nicholas Mc Guire wrote:
> if(!wait_for_completion_interruptible_timeout(...))
> only handles the timeout case - this patch adds handling the
> signal case the same as timeout and cleans up.
> 
> Signed-off-by: Nicholas Mc Guire 
> ---
> 
> Only the timeout case was being handled (a return of 0 from
> wait_for_completion_interruptible_timeout); the signal case (-ERESTARTSYS)
> was treated just like successful completion, which is most likely not
> reasonable.
> 
> Note that exynos_mipi_dsi_wr_data/exynos_mipi_dsi_rd_data return values
> are not checked at the call sites in s6e8ax0.c (cmd_read/cmd_write)!
> 
> This patch simply treats the signal case the same way as the timeout case,
> by releasing locks and returning 0 - which might not be the right thing to
> do - this needs a review by someone knowing the details of this driver.

While I agree that this patch is a bit better than the current state,
the code still looks wrong as Russell said.

I can merge this, but I'd rather have someone from Samsung look at the
code and change it to use wait_for_completion_killable_timeout() if
that's what this code is really supposed to use.

 Tomi

Re: [PATCH] video: treat signal like timeout as failure

2015-01-26 Thread Tomi Valkeinen
Hi,

On 20/01/15 07:23, Nicholas Mc Guire wrote:
> if(!wait_for_completion_interruptible_timeout(...))
> only handles the timeout case - this patch adds handling the
> signal case the same as timeout and cleans up.
> 
> Signed-off-by: Nicholas Mc Guire 
> ---
> 
> Only the timeout case was being handled (a return of 0 from
> wait_for_completion_interruptible_timeout); the signal case (-ERESTARTSYS)
> was treated just like successful completion, which is most likely not
> reasonable.
> 
> Note that exynos_mipi_dsi_wr_data/exynos_mipi_dsi_rd_data return values
> are not checked at the call sites in s6e8ax0.c (cmd_read/cmd_write)!
> 
> This patch simply treats the signal case the same way as the timeout case,
> by releasing locks and returning 0 - which might not be the right thing to
> do - this needs a review by someone knowing the details of this driver.

The code changes look ok to me, but again you have detailed descriptions
above which are not in the patch description. All the above looks like
something that should be in the patch description.

 Tomi

Re: [PATCH] i2c: exynos5: Move initialization code to subsys_initcall()

2015-01-13 Thread Tomi Valkeinen
On 12/01/15 10:43, Joonyoung Shim wrote:
> +Cc Tomi Valkeinen,
> 
> Hi Uwe,
> 
> On 01/12/2015 04:50 PM, Uwe Kleine-König wrote:
>> Hello,
>>
>> On Mon, Jan 12, 2015 at 11:53:02AM +0900, Joonyoung Shim wrote:
>>> This is required in order to ensure that core system devices such as
>>> voltage regulators attached via I2C are available early in boot.
>> Deferred probing isn't an option? If so I suggest adding the reasoning
>> in a comment to stop the next person converting it to that.
>> (And if not, please fix accordingly to use deferred probing.)
>>
> 
> I couldn't get the penguin logo since commit 92b004d ("video/logo:
> prevent use of logos after they have been freed"), and I just tried the
> old way because I missed the flow to move to deferred probing.
> 
> The fb driver probe seems to be run between the fb_logo_late_init
> late_initcall and the freeing of the logos.
> 
> Any ideas?

Thierry mentioned on IRC that he encountered the same issue. And I
encountered it also.

So... I'd rather not revert the fix, as the bug it fixes is quite a nasty
one: it happens while the console lock is held, so it looks like the
machine just froze. But I don't know how it could be improved with the
current kernel.

We could make the logos non-initdata, but I don't much like that option.
Or we could perhaps implement some new way to catch the freeing of initdata.
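As a rough sketch of the first option, assuming the generated logo files
still follow this pattern (the field values here are illustrative):

#include <linux/linux_logo.h>
#include <linux/init.h>

/* Today the generated logo objects are freed with other init memory: */
static unsigned char logo_data[] __initdata = {
	0x00, /* ... pixel data ... */
};

const struct linux_logo logo_linux_clut224 __initconst = {
	.type	= LINUX_LOGO_CLUT224,
	.width	= 80,
	.height	= 80,
	.data	= logo_data,
};

Dropping the __initdata/__initconst annotations from the generated files
would keep the images valid after free_initmem(), at the cost of memory
on systems that never show the logo again.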

Any other ideas?

 Tomi

Re: Weird/Unneeded call to msleep in exynos_mipi_dsi_wr_data in exynos_mipi_dsi_common.c

2014-12-18 Thread Tomi Valkeinen
On 18/12/14 15:48, nick wrote:
> Lucas,
> That's fair. Do you know of anyone who does have the hardware, so we can
> test my patch? If you do, then we can get this fixed rather easily.
> Cheers Nick
> 
> On 2014-12-18 08:39 AM, Lucas Stach wrote:
>> On Thursday, 18.12.2014 at 08:35 -0500, nick wrote:
>>> Krzysztof,
>>> If we look at the code for this function, it already is handling the
>>> data correctly. In addition, the locks seem to be better protection
>>> than msleep. Furthermore, there is no reason for this delay, as we are
>>> neither resetting the hardware nor waiting for the hardware here, so
>>> why is it needed? I don't have Exynos-based hardware lying around,
>>> though, to test it.
>>
>> If you can't test it, don't touch it. It's that simple.

There seem to be multiple msleep(20)s in exynos_mipi_dsi_common.c, a
few with /* FIXME */ and a few without any comments. It looks like bad
(but relatively harmless) code to me, but as Lucas said, if you can't
test it, don't touch it.

 Tomi

Re: [PATCH V8 00/14] drm/exynos: few patches to enhance bridge chip support

2014-12-15 Thread Tomi Valkeinen
On 12/12/14 11:51, Javier Martinez Canillas wrote:

> Tomi and Laurent,
> 
> You asked Ajay to change his series to use the video port and endpoint DT
> bindings instead of phandles, could you please review his latest version?

Looking only at the binding documentation patch, looks good to me.

 Tomi

Re: [PATCH V7 11/12] Documentation: bridge: Add documentation for ps8622 DT properties

2014-10-07 Thread Tomi Valkeinen
On 20/09/14 14:22, Ajay kumar wrote:

> Well, I am okay with using video ports to describe the relationship
> between the encoder, bridge and the panel.
> But it's just that I need to make use of two functions, where a phandle
> does it using just one function ;)
> -	panel_node = of_parse_phandle(dev->of_node, "panel", 0);
> +	endpoint = of_graph_get_next_endpoint(dev->of_node, NULL);
> +	if (!endpoint)
> +		return -EPROBE_DEFER;
> +
> +	panel_node = of_graph_get_remote_port_parent(endpoint);
> +	if (!panel_node)
> +		return -EPROBE_DEFER;
> 
> 
> If nobody else has objections over using of_graph functions instead
> of phandles, I can respin this patchset by making use of video ports.

The discussion did digress somewhat.

As a clarification, I'm in no way nack'ing this series because it
doesn't use the graphs for video connections. I don't see the simple
phandle bindings used here as broken as such.

 Tomi

Re: [PATCH V7 11/12] Documentation: bridge: Add documentation for ps8622 DT properties

2014-10-07 Thread Tomi Valkeinen
On 07/10/14 10:23, Laurent Pinchart wrote:

>> You mean the bridge driver would somehow take a peek into panel1 and
>> panel2 nodes, looking for bridge specific properties? Sounds somewhat
>> fragile to me... How would the bridge driver know a property is for the
>> bridge?
> 
> No, I mean the bridge driver should call the API provided by the panel 
> objects 
> to get information about the panels, and compute its configuration parameters 
> from those. I'm not sure if that's possible though, it depends on whether the 
> bridge parameters can be computed from information provided by the panel.

Right. My example tried to show a case where they can't be queried. The
board or the wiring causes signal degradation, which can be fixed by
increasing the bridge output voltage slightly.

So it has nothing really to do with the panel, but the board.

I fully admit that it is a purely theoretical case, and I don't have any
real use cases in mind right now.

 Tomi

Re: [PATCH V7 11/12] Documentation: bridge: Add documentation for ps8622 DT properties

2014-10-07 Thread Tomi Valkeinen
On 06/10/14 17:40, Laurent Pinchart wrote:

>> But seriously speaking, I was thinking about this. I'd really like to
>> have a generic video-mux node, that would still somehow allow us to have
>> device specific configurations for the video sources and sinks (which
>> the endpoints provide us), without endpoints.
>>
>> The reason endpoints don't feel right in this case is that it makes
>> sense that the mux has a single input and two outputs, but with
>> endpoints we need two endpoints from the bridge (which is still ok), but
>> we also need two endpoints in the mux's input side, which doesn't feel
>> right.
>>
>> I came up with the following. It's quite ugly, though. I hope someone
>> has ideas how it could be done in a neater way:
>>
>> bridge1234 {
>>  bridge1234-cfg1: config1 {
>>  high-voltage;
>>  };
>>
>>  bridge1234-cfg2: config2 {
>>  low-voltage;
>>  };
>>
>>  output = <&mux>;
>> };
>>
>> mux: video-gpio-mux {
>>  gpio = <123>;
>>
>>  outputs = <&panel1 &panel2>;
>>  input-configs = <&bridge1234-cfg1 &bridge1234-cfg2>;
>> };
>>
>> panel1: panel-foo {
>>
>> };
>>
>> panel2: panel-foo {
>>
>> };
>>
>> So the bridge node has configuration sets. These might be compared to
>> pinmux nodes. And the mux has a list of input-configs. When panel1 is to
>> be enabled, "bridge1234-cfg1" config becomes active, and the bridge is
>> given this configuration.
>>
>> I have to say the endpoint system feels cleaner than the above, but
>> perhaps it's possible to improve the method above somehow.
> 
> Isn't it possible for the bridge to compute its configuration at runtime by 
> querying properties of the downstream video entities ? That would solve the 
> problem neatly.

You mean the bridge driver would somehow take a peek into panel1 and
panel2 nodes, looking for bridge specific properties? Sounds somewhat
fragile to me... How would the bridge driver know a property is for the
bridge?

 Tomi

Re: [PATCH V7 11/12] Documentation: bridge: Add documentation for ps8622 DT properties

2014-10-06 Thread Tomi Valkeinen
On 25/09/14 09:23, Thierry Reding wrote:

> How are cameras different? The CPU wants to capture video data from the
> camera, so it needs to go look for a video capture device, which in turn
> needs to involve a sensor.

Let's say we have an XXX-to-YYY encoder. We use that encoder to convert
the SoC's XXX output to YYY, which is then shown on a panel. So, in this
case, the encoder's DT node will have a "panel" or "output" phandle,
pointing to the panel.

We then use the exact same encoder in a setup in which we have a camera
which outputs XXX, which the encoder then converts to YYY, which is then
captured by the SoC. Here the "output" phandle would point to the SoC.

>>> If you go the other way around, how do you detect how things connect?
>>> Where do you get the information about the panel so you can trace back
>>> to the origin?
>>
>> When the panel driver probes, it registers itself as a panel and gets
>> its video source. Similarly a bridge in between gets its video source,
>> which often would be the SoC, i.e. the origin.
> 
> That sounds backwards to me. The device tree serves the purpose of
> supplementing missing information that can't be probed if hardware is
> too stupid. I guess that's one of the primary reasons for structuring it
> the way we do, from the CPU point of view, because it allows the CPU to
> probe via the device tree.
> 
> Probing is always done downstream, so you'd start by looking at some
> type of bus interface and query it for what devices are present on the
> bus. Take for example PCI: the CPU only needs to know how to access the
> host bridge and will then probe devices behind each of the ports on that
> bridge. Some of those devices will be bridges, too, so it will continue
> to probe down the hierarchy.
> 
> Without DT you don't have a means to know that there was a panel before
> you've actually gone and probed your whole hierarchy and found a GPU
> with some sort of output that a panel can be connected to. I think it
> makes a lot of sense to describe things in the same way in DT.

Maybe I don't quite follow, but it sounds to me like you are mixing control
and data. For control, all you say is true. The CPU probes the devices
on control busses, either with the aid of HW or the aid of DT, going
downstream.

But the data paths are a different matter. The CPU/SoC may not even be
involved in the whole data path. You could have a sensor on the board
directly connected to a panel. Both are controlled by the CPU, but the
data path goes from the sensor to the panel (or vice versa). There's no
way the data paths can be "CPU centric" in that case.

Also, a difference with the data paths compared to control paths is that
they are not strictly needed for operation. An encoder can generate an
output without enabling its input (test pattern or maybe blank screen,
or maybe a screen with company logo). Or an encoder with two inputs
might only get the second input when the user requests a very high res
mode. So it is possible that the data paths are lazily initialized.

You do know that there is a panel right after the device is probed
according to its control bus. It doesn't mean that the data paths are
there yet. In some cases the user space needs to reconfigure the data
paths before a panel has an input and can be used to show an image from
the SoC's display subsystem.

The point here being that the data path bindings don't really relate to
the probing part. You can probe no matter which way the data path
bindings go, and no matter if there actually exists (yet) a probed
device on the other end of a data path phandle.

While I think having video data connections in DT either way, downstream
or upstream, would work, it has felt most natural for me to have the
phandles from video sinks to video sources.

The reason for that is that I think the video sink has to be in control
of its source. It's the sink that tells the source to start or stop or
reconfigure. So I have had need to get the video source from a video
sink, but I have never had the need to get the video sinks from video
sources.
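To illustrate the model, here is a minimal sketch; the ops and names are
made up for this example, not an existing kernel API:

struct video_source {
	int (*enable)(struct video_source *src);
	void (*disable)(struct video_source *src);
};

struct panel {
	struct video_source *src;	/* resolved from DT at probe time */
};

static int panel_enable(struct panel *p)
{
	int ret;

	/* the sink starts its source before enabling itself... */
	ret = p->src->enable(p->src);
	if (ret)
		return ret;

	/* ...then powers up the panel hardware (omitted) */
	return 0;
}

static void panel_disable(struct panel *p)
{
	/* power down the panel first (omitted), then stop the source */
	p->src->disable(p->src);
}

The panel never needs to know who implements its source, and the source
never needs a list of its sinks.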

 Tomi

Re: [PATCH V7 11/12] Documentation: bridge: Add documentation for ps8622 DT properties

2014-09-24 Thread Tomi Valkeinen
On 23/09/14 17:58, Thierry Reding wrote:

>> But if a panel driver controls its video source, it makes sense for the
>> panel driver to get its video source in its probe, and that happens
>> easiest if the panel has a link to the video source.
> 
> That's an orthogonal problem. You speak about the link in software here,
> whereas the phandles only represent the link in the description of
> hardware. Now DT describes hardware from the perspective of the CPU, so
> it's sort of like a cave that you're trying to explore. You start at the
> top with the address bus, from where a couple of tunnels lead to the
> various peripherals on the address bus. A display controller might have
> another tunnel to a panel, etc.

We don't do that for GPIOs, regulators, etc. either. And for video data
there are no address buses. Yes, for DSI and similar we have a control
bus, but that is modeled differently than the video data path.

Now, I'm fine with starting from the CPU, going outwards. I agree that
it's probably better to model it in the direction of the data stream,
even if that would make the SW a bit more complex.

However, I do think it's not as clear as you make it sound, especially
if we take cameras/sensors into account as Laurent explained elsewhere
in this thread.

> If you go the other way around, how do you detect how things connect?
> Where do you get the information about the panel so you can trace back
> to the origin?

When the panel driver probes, it registers itself as a panel and gets
its video source. Similarly a bridge in between gets its video source,
which often would be the SoC, i.e. the origin.

 Tomi

Re: [PATCH V7 11/12] Documentation: bridge: Add documentation for ps8622 DT properties

2014-09-24 Thread Tomi Valkeinen
On 23/09/14 17:45, Thierry Reding wrote:
> On Tue, Sep 23, 2014 at 02:31:35PM +0300, Tomi Valkeinen wrote:
>> On 23/09/14 12:39, Thierry Reding wrote:
>>
>>> My point is that if you use plain phandles you usually have the
>>> meta-data already. Referring to the above example, bridge0 knows that it
>>> should look for a bridge with phandle &bridge1, whereas bridge1 knows
>>> that the device it is connected to is a panel.
>>
>> The bridge should not care what kind of device is there on the other
>> end. The bridge just has an output, going to another video component.
> 
> You'll need to know at some point that one of the devices is a panel,
> otherwise you can't treat it like a panel.

The bridge doesn't need to treat it like a panel. Someone else might,
though. But the panel driver knows it is a panel, and can thus register
needed services, or mark itself as a panel.

>>>> Well, I can't say about this particular bridge, but afaik you can
>>>> connect a parallel RGB signal to multiple panels just like that, without
>>>> any muxing.
>>>
>>> Right, but in that case you're not reconfiguring the signal in any way
>>> for each of the panels. You send one single signal to all of them. For
>>
>> Yes, that's one use case, cloning. But I was not talking about that.
>>
>>> all intents and purposes there is only one panel. Well, I guess you
>>> could have separate backlights for the panels. In that case though it
>>> seems better to represent it at least as a virtual mux or bridge, or
>>> perhaps a "mux panel".
>>
>> I was talking about the case where you have two totally different
>> devices, let's say panels, connected to the same output. One could take
>> a 16-bit parallel RGB signal, the other 24-bit. Only one of them can be
>> enabled at a time (from HW perspective both can be enabled at the same
>> time, but then the other one shows garbage).
> 
> So we're essentially back to a mux, albeit maybe a virtual one.

Yes. Are you suggesting to model virtual hardware in .dts? I got the
impression that you wanted to model only the real hardware in .dts =).

But seriously speaking, I was thinking about this. I'd really like to
have a generic video-mux node, that would still somehow allow us to have
device specific configurations for the video sources and sinks (which
the endpoints provide us), without endpoints.

The reason endpoints don't feel right in this case is that it makes
sense that the mux has a single input and two outputs, but with
endpoints we need two endpoints from the bridge (which is still ok), but
we also need two endpoints in the mux's input side, which doesn't feel
right.

I came up with the following. It's quite ugly, though. I hope someone
has ideas how it could be done in a neater way:

bridge1234 {
bridge1234-cfg1: config1 {
high-voltage;
};

bridge1234-cfg2: config2 {
low-voltage;
};

output = <&mux>;
};

mux: video-gpio-mux {
gpio = <123>;

outputs = <&panel1 &panel2>;
input-configs = <&bridge1234-cfg1 &bridge1234-cfg2>;
};

panel1: panel-foo {

};

panel2: panel-foo {

};

So the bridge node has configuration sets. These might be compared to
pinmux nodes. And the mux has a list of input-configs. When panel1 is to
be enabled, "bridge1234-cfg1" config becomes active, and the bridge is
given this configuration.

I have to say the endpoint system feels cleaner than the above, but
perhaps it's possible to improve the method above somehow.

 Tomi

Re: [PATCH V7 11/12] Documentation: bridge: Add documentation for ps8622 DT properties

2014-09-24 Thread Tomi Valkeinen
On 23/09/14 17:41, Thierry Reding wrote:
> On Tue, Sep 23, 2014 at 12:34:54PM +0200, Andrzej Hajda wrote:
>> On 09/23/2014 12:10 PM, Thierry Reding wrote:
>>> On Tue, Sep 23, 2014 at 11:43:47AM +0200, Andrzej Hajda wrote:
 On 09/23/2014 10:35 AM, Thierry Reding wrote:
>>> [...]
> But I agree that it would be nice to unify bridges and encoders more. It
> should be possible to make encoder always a bridge (or perhaps even
> replace encoders with bridges altogether). Then once you're out of the
> DRM device everything would be a bridge until you get to a panel.
 I agree that some sort of unification of bridges and (slave) encoders is
 a good thing; this way we stay only with bridges and panels as receivers.
 But we will still have to deal with code like:

 if (on the other end of the link is a panel)
 	do panel framework specific things
 else
 	do bridge framework specific things

 The code in both branches usually does similar things, but due to
 framework differences it is difficult to merge.
>>> That's because they are inherently different entities. You can perform
>>> operations on a panel that don't make sense for a bridge and vice-versa.
>>>
 Ideally it would be best to get rid of such constructs, for example by
 trying to make frameworks per interface type exposed by a device
 (eg. video_sink) and not by device type (drm_bridge, drm_panel).
>>> But then you loose all information about the real type of device.
>> The driver knows the type of its device anyway. Why should it know the
>> device type of the devices it is connected to? It is enough that it
>> knows how to talk to the other device. Like in the real world: why
>> should a desktop PC know whether it is connected to a DVI monitor or to
>> a DVI/HDMI converter which is connected to a TV?
> 
> TVs are much more standardized. There are things like DDC/CI which can
> be used to control the device. Or there's additional circuitry or
> control paths to change things like brightness. The same isn't true of
> panels.
> 
>>>  Or you
>>> have to create a common base class. And then you're still back to
>>> dealing with the two specific cases in many places, so the gain seems
>>> rather minimal.
>>
>> For example, an RGB panel and an RGB/??? bridge have the same hardware
>> input interface. Why should they have different representations in the
>> kernel?
> 
> Because panels have different requirements than bridges. Panels for
> example require the backlight to be adjustable, bridges don't.

I agree that panels and bridges are different, but not from the video
source's point of view. A bridge/encoder just streams video data to the
next entity in the chain, according to the given configuration. It does
not matter if the next one is a panel or another bridge. The source
bridge doesn't care if the next entity has a backlight or not.

I don't know if we need a different representation for bridges and
panels. Thinking about backlight, the driver can just register the
backlight device if it needs one. There's no need to differentiate
bridges and panels for that. But maybe there are other reasons that
warrant different representations.

However, my current feeling is that there's no need for different
representations.

 Tomi

Re: [PATCH V7 11/12] Documentation: bridge: Add documentation for ps8622 DT properties

2014-09-23 Thread Tomi Valkeinen
On 23/09/14 17:38, Thierry Reding wrote:
> On Tue, Sep 23, 2014 at 03:09:44PM +0300, Tomi Valkeinen wrote:
>> On 23/09/14 13:01, Thierry Reding wrote:
>>> On Tue, Sep 23, 2014 at 12:40:24PM +0300, Tomi Valkeinen wrote:
> [...]
>>>> What exactly is a bridge and what is an encoder? Those are DRM
>>>> constructs, aren't they?
>>>
>>> Yes. I think bridges are mostly a superset of encoders.
>>
>> Superset in what way? If I have a SiI9022 RGB-to-HDMI encoder embedded
>> into my SoC (i.e. a DRM encoder), or I have a board with a SiI9022
>> encoder on the board (i.e. a DRM bridge), what is different?
> 
> Superset in what they represent from a software point of view. As in:
> an encoder /is a/ bridge. Though they aren't currently implemented that
> way.

So they are equal, not superset? Or what does an encoder do that a
bridge doesn't?

>>>> As I see it, a video pipeline consists of a video source (display
>>>> controller usually), a chain of encoders (all of which may not be
>>>> strictly "encoders", they could be level shifters, muxes, ESD protection
>>>> devices or such), and either a display device like a panel or a
>>>> connector to outside world.
>>>
>>> Well, the panel itself is attached to a connector. The connector is
>>> really what userspace uses to control the output and indirectly the
>>> panel.
>>
>> Yep. That's also something that should be fixed. A connector SW entity
>> is fine when connecting an external monitor, but a panel directly
>> connected to the SoC does not have a connector, and it's the panel that
>> should be controlled from the userspace.
> 
> I disagree. A panel is usually also attached using some sort of
> connector. Maybe not an HDMI, VGA or DP connector, but still some sort
> of connector where often it can even be removed.

Yes, but a normal panel connector is really just an extension of wires.
There is no difference if you wire the panel directly to the video
source, or if there's a connector.

I think usually connectors to the external world are not quite as
simple, as there may be some protection components or such, but even if
there's nothing extra, they are still a clear component visible to
outside world. For HDMI you (sometimes) even need properties for the
connector, like source for +5V, i2c bus, hotplug pin.

But even if there are no extra properties like that, I still think it's
good to have a connector entity for a connector to outside world.
Whereas I don't see any need for such when the connector is internal.
That said, I don't see any reason to prevent that either.

> Panels are theoretically hot-pluggable, too, much in the same way that
> monitors are.

So are encoders, in the same way. I have a main board with a video
connector. That goes to an encoder on a display board, which again has
a connector to which the panel is connected.

I also have a panel that can be connected directly to the main board's
video output.

>> But again, the legacy...
> 
> You've got to make the abstraction somewhere, and I'm quite surprised
> actually how well the DRM abstractions are working out. I mean we can
> support anything from HDMI to DP to eDP, DSI and LVDS with just the
> connector abstraction and it all just works. That makes it a pretty
> good abstraction in my book.

Yes, I agree it's good in the sense that it works. And much better than
fbdev, which has nothing. But it's not good in the sense that it's not
what I'd design if I could start from scratch.

 Tomi

Re: [PATCH V7 11/12] Documentation: bridge: Add documentation for ps8622 DT properties

2014-09-23 Thread Tomi Valkeinen
On 23/09/14 17:29, Thierry Reding wrote:

>>> Trying to do this within the bridge's node directly has two flaws:
>>>
>>> 1) It violates the DT principle of describing hardware. The
>>>device itself does not know anything about multiple streams
>>>and deals only with a single input and output. You'd be
>>>describing a board specific aspect in the device binding.
>>
>> We _are_ describing board specific aspects in the board.dts file. That's
>> what it is about.
>>
>> If the board's hardware is such that the bridge's voltage level needs to
>> be higher when using panel0, that's very much describing hardware.
> 
> You misunderstood. It's describing a use-case rather than the device
> itself. You describe, *within the device node*, that it can be muxed
> whereas the device itself doesn't know anything about muxing. That's
> where the violation is. You require a board-integration issue to be
> described in the binding of a specific device.

The .dts files are not only about hard HW features. The .dts files are
full of configuration properties and such. So while the aim should be to
describe only the HW, I think it's well within the limits to describe
things like whether to enable/disable HW features for a particular output.

I guess there could be an external mux node, and that mux node could
contain HW specific properties for both the inputs and outputs of the
mux. The input and output drivers could fetch their features from that
mux's node.

But that sounds much worse than having ports/endpoints and HW-specific
properties there. (Or actually, does it? It would simplify some things,
but how would the bridge driver get its properties?)

And I don't quite see your point. Presume we have a property in the
bridge for, say, "higher-voltage-level", which with simple phandles
would be present in the main bridge node, and presume that's ok. (Is
it, in your opinion? That is describing configuration, not HW.)

If so, I don't think it's different from having two endpoints, each with
their own property. In that case we just have the wires from the single
output going to two destinations, and so we need two different places
for the property.

>>> We have a common helper already. It's called of_parse_phandle().
>>
>> Yes, but what is the name of the property, and where does it go? Does
>> the panel have a phandle to the bridge? Does the bridge have a phandle
>> to the panel?
> 
> My point was that we don't need a common helper library if we have a way
> of getting at the phandle and resolve the phandle to a bridge or a
> panel. While I think standard names can be advantageous, using the
> regular of_parse_phandle() and a lookup function we don't even need
> them.

Well, we also have cases where the more complex video graphs are needed
(or, at least, where they are not too complex, so they could well be
used). If so, it'd be nice to have the driver manage the connections via
a single API, which would underneath manage both the video graphs and
simple phandles, depending on what is used in the .dts file.
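Something like the following could sit under such an API (a sketch only:
the of_graph_* and of_parse_phandle() calls are existing kernel helpers,
but the wrapper itself is hypothetical):

#include <linux/of.h>
#include <linux/of_graph.h>

static struct device_node *video_get_remote(struct device_node *np,
					    const char *prop)
{
	struct device_node *ep, *remote;

	/* prefer the video graph bindings if the node uses them */
	ep = of_graph_get_next_endpoint(np, NULL);
	if (ep) {
		remote = of_graph_get_remote_port_parent(ep);
		of_node_put(ep);
		return remote;
	}

	/* no graph: fall back to a simple phandle, e.g. "panel" */
	return of_parse_phandle(np, prop, 0);
}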

 - study the connections before the drivers are loaded
>>>
>>> Why would you need that?
>>
>> I think this was discussed earlier, maybe Laurent remembers the use cases.
>>
>> I think one possibility here is to have an understanding of the
>> pipelines even if the modules have not been loaded or a single device
>> has failed to probe.
> 
> I don't see how that would be useful.

Ok. But it's still something that can be done with standardized
bindings, and cannot be done with non-standardized.

 - handle the connections anywhere else than the specific driver for the
 component
>>>
>>> Why should we do that?
>>
>> We could, for example, have a single component (common or SoC specific)
>> that manages the video pipelines. The drivers for the
>> encoders/bridges/panels would just probe the devices, without doing
>> anything else. The master component would then parse the links and
>> connect the devices in whatever way it sees best.
> 
> You don't need any of that. The SoC driver simply obtains a phandle to
> the first bridge connected to it. That bridge will already have done the
> same for any chained bridges. There's no need to connect them in any way
> anymore because they are already properly set up.

Yes, there are different ways to do this on the SW side. The point is,
with standard bindings we have many different options for how to do it.
With non-standard bindings we don't.

If there are no strong reasons to use non-standard bindings, I don't see
why we would not standardize them.

>>> Having a standard representation only matters if you want to be able to
>>> parse the pipeline without knowing about the device specifics. But I'm
>>> not sure I understand why you would want to do that. This sounds like
>>> you'd want some generic code to be able to construct a pipeline. But at
>>> the same time you can't do anything meaningful with that pipeline
>>> because the generic code doesn't know ho

Re: [PATCH V7 11/12] Documentation: bridge: Add documentation for ps8622 DT properties

2014-09-23 Thread Tomi Valkeinen
On 23/09/14 13:01, Thierry Reding wrote:
> On Tue, Sep 23, 2014 at 12:40:24PM +0300, Tomi Valkeinen wrote:
>> On 23/09/14 11:35, Thierry Reding wrote:
>>
>>> Well, a display controller is never going to attach to a panel directly.
>>
>> With parallel RGB, that (almost) happens. There's voltage level shifting
>> probably in the middle, but I don't see anything else there.
> 
> The level shifting could be considered an encoder. Anyway, I didn't mean
> physically attach to a panel but rather in DRM. You'll always need an
> encoder and connector before you go to the panel.

"For now", you mean.

I'd rather see much more freedom there. If a display controller is
connected to a panel directly, with only non-controllable level shifters
in between (I don't even think those are strictly needed; why couldn't
there be a low-end SoC that works with higher voltages?), I think we
should model it on the SW side by just having a device for the display
controller and a device for the panel. No artificial
encoders/bridges/connectors needed in between.

But I understand that for DRM legacy reasons that may never happen.

>>> But I agree that it would be nice to unify bridges and encoders more. It
>>> should be possible to make encoder always a bridge (or perhaps even
>>> replace encoders with bridges altogether). Then once you're out of the
>>> DRM device everything would be a bridge until you get to a panel.
>>
>> What exactly is a bridge and what is an encoder? Those are DRM
>> constructs, aren't they?
> 
> Yes. I think bridges are mostly a superset of encoders.

Superset in what way? If I have a SiI9022 RGB-to-HDMI encoder embedded
into my SoC (i.e. a DRM encoder), or I have a board with a SiI9022
encoder on the board (i.e. a DRM bridge), what is different?

>> As I see it, a video pipeline consists of a video source (display
>> controller usually), a chain of encoders (all of which may not be
>> strictly "encoders", they could be level shifters, muxes, ESD protection
>> devices or such), and either a display device like a panel or a
>> connector to outside world.
> 
> Well, the panel itself is attached to a connector. The connector is
> really what userspace uses to control the output and indirectly the
> panel.

Yep. That's also something that should be fixed. A connector SW entity
is fine when connecting an external monitor, but a panel directly
connected to the SoC does not have a connector, and it's the panel that
should be controlled from the userspace.

But again, the legacy...

>> Am I right that in DRM world the encoder is the first device in the
>> display chain after the display controller,
> 
> Yes.
> 
>> and the next is a bridge?
> 
> A bridge or a connector. Typically it would be a connector, but that's
> only if you don't have bridges in between.
> 
>> That sounds totally artificial, and I hope we don't reflect that in the
>> DT side in any way.
> 
> Yes, they are software concepts and I'm not aware of any of it being
> exposed in DT. A display controller is usually implemented as DRM CRTC
> object and outputs (DSI, eDP, HDMI) as encoder (often with a connector
> tied to it).

Ok. Thanks for the clarifications.

 Tomi

Re: [PATCH V7 11/12] Documentation: bridge: Add documentation for ps8622 DT properties

2014-09-23 Thread Tomi Valkeinen
On 23/09/14 12:53, Thierry Reding wrote:

>> Yes, but in this case we know of existing boards that have complex
>> setups. It's not theoretical.
> 
> Complex setups involving /this particular/ bridge and binding are
> theoretical at this point, however.

Right, but this discussion, at least on my part, has not been so much
about this particular bridge as about bridges in general.

So I don't have any requirements for this bridge driver/bindings to
support complex use cases at the time being.

But I do want us to have an option to use the bridge in the future in
such complex cases. And yes, we can't guarantee that we'd hit the
bindings right whatever we do, but we should think about it and at least
try.

>>> phandles should simply point to the next element in the pipeline and the
>>> OS abstractions should be good enough to handle the details about how to
>>> chain the elements.
>>
>> I, on the other hand, would rather see the links the other way around.
>> Panel having a link to the video source, etc.
> 
> Why? It seems much more natural to describe this using the natural flow
> of data. The device furthest away from the CPU should be the target of
> the phandle. That way the DT can be used to trace the flow of data down
> the pipeline.

Because I see video components "using" their video sources. A panel uses
an encoder to produce video data for the panel. An encoder uses its
video source to produce video data. A bit like a device uses a gpio, or
a pwm.

Especially with more complex encoders/panels, having the panel/encoder in
control of its video source is often the easiest (only?) way to manage
the job. This has worked well on OMAP, whereas the earlier model where
the video source controlled the video sink was full of problems. Not
that that exactly proves anything, but that's my experience, and I
didn't find any easy solutions when the video source was in control.

So while the video data flows from SoC to the panel, the control goes
the other way. Whether the DT links should model the video data or
control, I don't know. Both feel natural to me.

But if a panel driver controls its video source, it makes sense for the
panel driver to get its video source in its probe, and that happens
easiest if the panel has a link to the video source.
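A sketch of what that probe could look like with a plain sink-to-source
phandle; "video-source", struct video_source and video_source_find() are
made-up names for illustration, while of_parse_phandle() and of_node_put()
are the existing helpers:

#include <linux/of.h>
#include <linux/platform_device.h>

struct video_source;	/* hypothetical handle for the source */
struct video_source *video_source_find(struct device_node *node);

static int panel_probe(struct platform_device *pdev)
{
	struct device_node *src_node;
	struct video_source *src;

	/* follow the sink-to-source link in the panel's own node */
	src_node = of_parse_phandle(pdev->dev.of_node, "video-source", 0);
	if (!src_node)
		return -ENODEV;

	src = video_source_find(src_node);	/* hypothetical lookup */
	of_node_put(src_node);
	if (!src)
		return -EPROBE_DEFER;	/* source driver not bound yet */

	/* register the panel, keeping a reference to its source */
	return 0;
}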

 Tomi

Re: [PATCH V7 11/12] Documentation: bridge: Add documentation for ps8622 DT properties

2014-09-23 Thread Tomi Valkeinen
On 23/09/14 12:39, Thierry Reding wrote:

> My point is that if you use plain phandles you usually have the
> meta-data already. Referring to the above example, bridge0 knows that it
> should look for a bridge with phandle &bridge1, whereas bridge1 knows
> that the device it is connected to is a panel.

The bridge should not care what kind of device is there on the other
end. The bridge just has an output, going to another video component.

>> Well, I can't say about this particular bridge, but afaik you can
>> connect a parallel RGB signal to multiple panels just like that, without
>> any muxing.
> 
> Right, but in that case you're not reconfiguring the signal in any way
> for each of the panels. You send one single signal to all of them. For

Yes, that's one use case, cloning. But I was not talking about that.

> all intents and purposes there is only one panel. Well, I guess you
> could have separate backlights for the panels. In that case though it
> seems better to represent it at least as a virtual mux or bridge, or
> perhaps a "mux panel".

I was talking about the case where you have two totally different
devices, let's say panels, connected to the same output. One could take
a 16-bit parallel RGB signal, the other 24-bit. Only one of them can be
enabled at a time (from HW perspective both can be enabled at the same
time, but then the other one shows garbage).

 Tomi

Re: [PATCH V7 11/12] Documentation: bridge: Add documentation for ps8622 DT properties

2014-09-23 Thread Tomi Valkeinen
On 23/09/14 12:28, Thierry Reding wrote:

>> But, for example, let's say that the board is designed in a way that for
>> panel0 the bridge needs to output a bit higher level voltages than for
>> panel1. That's not a property of the panel, so it can't be queried from
>> the panel.
>>
>> That feature (varying the voltage level) is specific to the bridge, and
>> thus the properties should be in the bridge's DT data. So we need two
>> different configurations in the bridge, one for panel0 and one for
>> panel1. This is what endpoints give us.
> 
> You could get the same by moving the mux in front of the bridge instead.

I didn't understand. Moving the mux between the SoC and the bridge? How
would that solve the problem? Or what do you mean by "in front of the
bridge"?

> Trying to do this within the bridge's node directly has two flaws:
> 
>   1) It violates the DT principle of describing hardware. The
>  device itself does not know anything about multiple streams
>  and deals only with a single input and output. You'd be
>  describing a board specific aspect in the device binding.

We _are_ describing board specific aspects in the board.dts file. That's
what it is about.

If the board's hardware is such that the bridge's voltage level needs to
be higher when using panel0, that's very much describing hardware.

>   2) Such a solution would have to be implemented for all bridge
>  devices that can potentially be muxed (really all of them).
>  If you describe it accurately in a separate mux node you get
>  genericity for free and it will work for all bridges.

I do agree that a generic mux is better if there's nothing special to be
configured. But how do you use a generic mux node for bridge device
specific features?

Well, I think it'd be great to have all bindings use ports/endpoints (or
a standard simpler representation), and have good helpers for those.
Then, the bridge driver would not need to know or care about endpoints
as such; it would just be told that this port & endpoint should be
enabled, and the bridge driver would get its configuration for that
endpoint, which it would program to the HW. And the same would happen
with or without complex video graph.

But that's not strictly required, the bridge driver could as well
support only single configuration. When someone uses the bridge in a
setup that requires multiple endpoints, he can then extend the driver to
support multiple endpoints.
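As a sketch of that idea, using the existing of_graph helpers (the
"high-voltage" property is the illustrative example from this thread,
not a real binding):

#include <linux/errno.h>
#include <linux/of.h>
#include <linux/of_graph.h>
#include <linux/printk.h>

static int bridge_apply_config(struct device_node *np, u32 active_id)
{
	struct device_node *ep = NULL;
	struct of_endpoint info;

	/* find the active endpoint on the bridge's single output port */
	while ((ep = of_graph_get_next_endpoint(np, ep)) != NULL) {
		if (of_graph_parse_endpoint(ep, &info))
			continue;
		if (info.port != 0 || info.id != active_id)
			continue;

		/* fetch this endpoint's configuration... */
		if (of_property_read_bool(ep, "high-voltage"))
			pr_debug("using higher output voltage\n");

		of_node_put(ep);
		return 0;
	}

	return -ENODEV;		/* no such endpoint described */
}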

>> I disagree. If we don't have a common way to describe the connections
>> between video devices, we cannot:
>>
>> - have a common helper library to parse the connections
> 
> We have a common helper already. It's called of_parse_phandle().

Yes, but what is the name of the property, and where does it go? Does
the panel have a phandle to the bridge? Does the bridge have a phandle
to the panel?

>> - study the connections before the drivers are loaded
> 
> Why would you need that?

I think this was discussed earlier, maybe Laurent remembers the use cases.

I think one possibility here is to have an understanding of the
pipelines even if the modules have not been loaded or a single device
has failed to probe.

>> - handle the connections anywhere else than the specific driver for the
>> component
> 
> Why should we do that?

We could, for example, have a single component (common or SoC specific)
that manages the video pipelines. The drivers for the
encoders/bridges/panels would just probe the devices, without doing
anything else. The master component would then parse the links and
connect the devices in whatever way it sees best.

>>> While there are probably legitimate cases where the video graph is
>>> useful and makes sense, in many cases terms like ports and endpoints are
>>> simply confusing.
>>
>> I again agree that I'd like to have a simpler representation than video
>> graphs for the simpler use cases. But again, I think it's important to
>> have a standard representation for those connections.
>>
>> Why do you think it wouldn't be good to have a standard for the simpler
>> connections (in addition to the video graphs)? What kind of device
>> specific variations do you see that would be needed?
> 
> What do you mean by standard? I agree that it would make sense to give
> properties standard names, but I don't think we need to go any further.

Standard names and standard direction. I don't think there's anything
else to simple phandles.

> I mean, in the case of a simple phandle there's not much more to it
> anyway. But I fail to understand why standard properties should be a
> hard requirement.

And I fail to see why they should not be a hard requirement =).

> Having a standard representation only matters if you want to be able to
> parse the pipeline without knowing about the device specifics. But I'm
> not sure I understand why you would want to do that. This sounds like
> you'd want some generic code to be able to construct a pipeline. But at
> th

Re: [PATCH V7 11/12] Documentation: bridge: Add documentation for ps8622 DT properties

2014-09-23 Thread Tomi Valkeinen
On 23/09/14 11:35, Thierry Reding wrote:

> Well, a display controller is never going to attach to a panel directly.

With parallel RGB, that (almost) happens. There's voltage level shifting
probably in the middle, but I don't see anything else there.

> But I agree that it would be nice to unify bridges and encoders more. It
> should be possible to make encoder always a bridge (or perhaps even
> replace encoders with bridges altogether). Then once you're out of the
> DRM device everything would be a bridge until you get to a panel.

What exactly is a bridge and what is an encoder? Those are DRM
constructs, aren't they?

As I see it, a video pipeline consists of a video source (display
controller usually), a chain of encoders (all of which may not be
strictly "encoders", they could be level shifters, muxes, ESD protection
devices or such), and either a display device like a panel or a
connector to outside world.

Am I right that in DRM world the encoder is the first device in the
display chain after the display controller, and the next is a bridge?
That sounds totally artificial, and I hope we don't reflect that in the
DT side in any way.

 Tomi

Re: [PATCH V7 11/12] Documentation: bridge: Add documentation for ps8622 DT properties

2014-09-23 Thread Tomi Valkeinen
On 23/09/14 09:21, Thierry Reding wrote:

>> Well, I can write almost any kind of bindings, and then evidently my
>> device would work. For me, on my board.
> 
> Well, that's the whole problem with DT. For many devices we only have a
> single setup to test against. And even when we have several they often
> are derived from each other. But the alternative would be to defer
> (possibly indefinitely) merging support for a device until a second,
> wildly different setup shows up. That's completely unreasonable and we
> need to start somewhere.

Yes, but in this case we know of existing boards that have complex
setups. It's not theoretical.

I'm not saying we should stop everything until we have a 100% solution
for the rare complex cases. But we should keep them in mind and, when
possible, solve problems in a way that will work for the complex cases also.

>> I guess non-video devices haven't had need for those. I have had lots of
>> boards with video setup that cannot be represented with simple phandles.
>> I'm not sure if I have just been unlucky or what, but my understanding is
>> that other people have encountered such boards also. Usually the
>> problems encountered there have been circumvented with some hacky video
>> driver for that specific board, or maybe a static configuration handled
>> by the boot loader.
> 
> I have yet to encounter such a setup. Can you point me at a DTS for one
> such setup? I do remember a couple of hypothetical cases being discussed
> at one time or another, but I haven't seen any actual DTS content where
> this was needed.

No, I can't point to them as they are not in the mainline (at least the
ones I've been working on), for obvious reasons.

With a quick glance, I have the following devices in my cabinet that
have more complex setups: OMAP 4430 SDP, BeagleBoneBlack + LCD, AM43xx
EVM. Many Nokia devices used to have such setups, usually so that the
LCD and tv-out were connected to the same video source.

>> Do we have a standard way of representing the video pipeline with simple
>> phandles? Or does everyone just do their own version? If there's no
>> standard way, it sounds it'll be a mess to support in the future.
> 
> It doesn't matter all that much whether the representation is standard.

Again, I disagree.

> phandles should simply point to the next element in the pipeline and the
> OS abstractions should be good enough to handle the details about how to
> chain the elements.

I, on the other hand, would rather see the links the other way around.
Panel having a link to the video source, etc.

The video graphs have two-way links, which of course is the safest
option, but also more verbose and redundant.

When this was discussed earlier, it was unclear which way the links
should be. It's true that only links in one direction are strictly
needed, but the question raised was that if the drivers end up always
traversing the links the other way, the performance penalty may be
somewhat big (if I recall correctly).

 Tomi

Re: [PATCH V7 11/12] Documentation: bridge: Add documentation for ps8622 DT properties

2014-09-23 Thread Tomi Valkeinen
On 23/09/14 09:04, Thierry Reding wrote:

> I certainly agree that it's useful to have standard ways to describe at
> least various aspects. For example I think it would be useful to add
> standard properties for a bridge's connections, such as "bridge" or
> "panel" to allow bridge chaining and attaching them to panels.

I don't see a need for such properties. Do you have examples where they
would be needed?

The driver for the respective device does know if it's a bridge or a
panel, so that information is there as soon as the driver has loaded.

> But I think that should be the end of it. Mandating anything other than
> that will just complicate things and limit what people can do in the
> binding.
> 
> One of the disadvantages of the video graph bindings is that they are
> overly vague in that they don't carry information about what type a
> device is. Bindings then have to require additional meta-data, at which
> point it's become far easier to describe things with a custom property
> that already provides context.

I don't see why the graphs and such metadata are connected in any way.
They are separate issues. If we need such metadata, it needs to be added
in any case. That is not related to the graphs.

>> Yes, there's always one active input and one output for this bridge.
>> What the video graphs would bring is to have the possibility to have
>> multiple inputs and outputs, of which a single ones could be active at a
>> time. The different inputs and outputs could even have different
>> settings required (say, first output requires this bit set, but when
>> using second output that bit must be cleared).
> 
> As discussed elsewhere this should be handled at a different level then.
> DT should describe the hardware and this particular bridge device simply
> doesn't have a means to connect more than a single input or more than a
> single output.

Well, I can't say about this particular bridge, but afaik you can
connect a parallel RGB signal to multiple panels just like that, without
any muxing.

If a mux is needed, I agree that there should be a device for that mux.
But we still need a way to have multiple different "configuration sets"
for the bridge, as I explained in the earlier mail.

 Tomi

Re: [PATCH V7 11/12] Documentation: bridge: Add documentation for ps8622 DT properties

2014-09-23 Thread Tomi Valkeinen
On 23/09/14 08:53, Thierry Reding wrote:

>> Yes, it's true we need a mux there. But we still have the complication
>> that for panel0 we may need different ps8622 settings than for panel1.
> 
> Yes, and that's why the bridge should be querying the panel for the
> information it needs to determine the settings.

That only works if the settings to be queried are standard ones.

But, for example, let's say that the board is designed in a way that for
panel0 the bridge needs to output a bit higher level voltages than for
panel1. That's not a property of the panel, so it can't be queried from
the panel.

That feature (varying the voltage level) is specific to the bridge, and
thus the properties should be in the bridge's DT data. So we need two
different configurations in the bridge, one for panel0 and one for
panel1. This is what endpoints give us.

>> If that's the case, then I think we'd need to have two output endpoints
>> in ps8622, both going to the mux, and each having the settings for the
>> respective panel.
> 
> But we'd be lying in DT. It no longer describes the hardware properly.
> The device only has a single input and a single output with no means to
> mux anything. Hence the device tree shouldn't be faking multiple inputs
> or outputs.

No, a port is a single physical output. An endpoint is a logical entity
on that single physical output.

So a bridge with a single output has one output port, but it may have as
many endpoints on that port as needed. Only one endpoint of a port can
be active at a time.

> I don't think we need to have a common way to describe video devices. In
> my opinion DT bindings are much better if they are specifically tailored
> towards the device that they describe. We'll provide a driver for that
> device anyway, so we should be creating appropriate abstractions at the
> OS level to properly handle them.

I disagree. If we don't have a common way to describe the connections
between video devices, we cannot:

- have a common helper library to parse the connections
- study the connections before the drivers are loaded
- handle the connections anywhere else than the specific driver for the
component

> To stay with the example of the board/display, I'd think that the final
> component of the board DT would implement a bridge. The first component
> of the display DT would similarly implement a bridge. Now if we have a
> way of chaining bridges and controlling a chain of bridges, then there
> is no need for anything more complex than a plain phandle in a property
> from the board bridge to the display bridge.

Right, that works for many cases. Of course, nothing says there's a
bridge on the main board; it could be connected directly to the SoC.

>>> Whether this is described using a single phandle to the bridge or a more
>>> complicated graph is irrelevant. What matters is that you get a phandle
>>> to the bridge. The job of the operating system is to give drivers a way
>>> to resolve that phandle to some object and provide an API to access that
>>> object.
>>
>> I agree it's not relevant whether we use a simple phandle or complex
>> graph. What matter is that we have a standard way to express the video
>> paths, which everybody uses.
> 
> Not necessarily. Consistency is always good, but I think simplicity
> trumps consistency. What matters isn't how the phandle is referenced in
> the device tree, what matters is that it is referenced and that it makes
> sense in the context of the specific device. Anything else is the job of
> the OS.
> 
> While there are probably legitimate cases where the video graph is
> useful and makes sense, in many cases terms like ports and endpoints are
> simply confusing.

I again agree that I'd like to have a simpler representation than video
graphs for the simpler use cases. But again, I think it's important to
have a standard representation for those connections.

Why do you think it wouldn't be good to have a standard for the simpler
connections (in addition to the video graphs)? What kind of device
specific variations do you see that would be needed?

Even if the points I gave above about why a common way to describe
connections is important were not relevant today, it sounds much safer
to have a standard representation in the DT for the connections. I'd
only go for device-specific custom descriptions if there were a very
important reason for that. And I can't come up with any such reason.

 Tomi






Re: [PATCH V7 11/12] Documentation: bridge: Add documentation for ps8622 DT properties

2014-09-22 Thread Tomi Valkeinen
On 20/09/14 18:27, Javier Martinez Canillas wrote:

> I see that Documentation/devicetree/bindings/video/ti,omap-dss.txt
> mentions that the Video Ports binding documentation is in
> Documentation/devicetree/bindings/video/video-ports.txt but I don't
> see that this file exists in the kernel [1]. I see though that is was
> included on your series adding DSS DT support [2].
> 
> Do you know what happened with this file?

Ah, I need to update the link. This describes the graph:

Documentation/devicetree/bindings/graph.txt

 Tomi






Re: [PATCH V7 11/12] Documentation: bridge: Add documentation for ps8622 DT properties

2014-09-22 Thread Tomi Valkeinen
On 22/09/14 11:26, Thierry Reding wrote:
> On Fri, Sep 19, 2014 at 05:28:37PM +0300, Tomi Valkeinen wrote:
>> On 19/09/14 16:59, Ajay kumar wrote:
>>
>>> I am not really able to understand, what's stopping us from using this
>>> bridge on a board with "complex" display connections. To use ps8622 driver,
>>> one needs to "attach" it to the DRM framework. For this, the DRM driver
>>
>> Remember that when we talk about DT bindings, there's no such thing as
>> DRM. We talk about hardware. The same bindings need to work on any
>> operating system.
>>
>>> would need the DT node for ps8622 bridge. For which I use a phandle.
>>
>> A complex one could be for example a case where you have two different
>> panels connected to ps8622, and you can switch between the two panels
>> with, say, a gpio. How do you present that with a simple phandle?
> 
> How do you represent that with a graph? Whether you describe it using a
> graph or a simple phandle you'll need additional nodes somewhere in
> between. Perhaps something like this:
> 
>   mux: ... {
>   compatible = "gpio-mux-bridge";
> 
>   gpio = <&gpio 42 GPIO_ACTIVE_HIGH>;
> 
>   panel@0 {
>   panel = <&panel0>;
>   };
> 
>   panel@1 {
>   panel = <&panel1>;
>   };
>   };
> 
>   ps8622: ... {
>   bridge = <&mux>;
>   };
> 
>   panel0: ... {
>   ...
>   };
> 
>   panel1: ... {
>   ...
>   };

Yes, it's true we need a mux there. But we still have the complication
that for panel0 we may need different ps8622 settings than for panel1.

If that's the case, then I think we'd need to have two output endpoints
in ps8622, both going to the mux, and each having the settings for the
respective panel.
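Roughly like this (again only a sketch; mux_in0/mux_in1 would be
endpoints in the mux node, and the per-panel settings are just
placeholders):

	ps8622: ... {
		port {
			#address-cells = <1>;
			#size-cells = <0>;

			endpoint@0 {
				reg = <0>;
				remote-endpoint = <&mux_in0>;
				/* ps8622 settings used for panel0 */
			};

			endpoint@1 {
				reg = <1>;
				remote-endpoint = <&mux_in1>;
				/* ps8622 settings used for panel1 */
			};
		};
	};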

>>> If some XYZ platform wishes to pick the DT node via a different method,
>>> they are always welcome to do it. Just because I am not specifying a
>>> video port/endpoint in the DT binding example, would it mean that platform
>>> cannot make use of ports in future? If that is the case, I can add something
>>
>> All the platforms share the same bindings for ps8622. If you now specify
>> that ps8622 bindings use a simple phandle, then anyone who uses ps8622
>> should support that.
>>
>> Of course the bindings can be extended in the future. In that case the
>> drivers need to support both the old and the new bindings, which is
>> always a hassle.
>>
>> Generally speaking, I sense that we have different views of how display
>> devices and drivers are structured. You say "If some XYZ platform wishes
>> to pick the DT node via a different method, they are always welcome to
>> do it.". This sounds to me that you see the connections between display
>> devices as something handled by a platform specific driver.
>>
>> I, on the other hand, see connections between display devices as common
>> properties.
>>
>> Say, we could have a display board, with a panel and an encoder and
>> maybe some other components, which takes parallel RGB as input. The same
>> display board could as well be connected to an OMAP board or to an
>> Exynos board.
>>
>> I think the exact same display-board.dtsi file, which describes the
>> devices and connections in the display board, should be usable on both
>> OMAP and Exynos platforms. This means we need to have a common way to
>> describe video devices, just as we have for other things.
> 
> A common way to describe devices in DT isn't going to give you that. The
> device tree is completely missing any information about how to access an
> extension board like that. The operating system defines the API by which
> the board can be accessed. I imagine that this would work by making the
> first component of the board a bridge of some sort and then every driver
> that supports that type of bridge (ideally just a generic drm_bridge)
> would also work with that display board.

I'm not sure I follow.

Obviously there needs to be a board specific .dts file that brings the
board and the display board together. So, say, the display-board.dtsi
has a i2c touchscreen node, but the board.dts will tell which i2c bus
it's connected to.

Well, now as I wrote that, I wonder if that's possible... A node needs
to have a parent, and for i2c that must be the i2c master. Is that
something the DT overlays/fragments or such are supposed to handle?

But let's only thi

Re: [PATCH V7 11/12] Documentation: bridge: Add documentation for ps8622 DT properties

2014-09-22 Thread Tomi Valkeinen
On 22/09/14 11:06, Thierry Reding wrote:

>>> Why do we need a complex graph when it can be handled using a simple 
>>> phandle?
>>
>> Maybe in your case you can handle it with a simple phandle. Can you
>> guarantee that it's enough for everyone, on all platforms?
> 
> Nobody can guarantee that. An interesting effect that DT ABI stability
> has had is that people now try to come up with overly generic concepts
> to describe devices in an attempt to make them more future-proof. I
> don't think we're doing ourselves a favour with that approach.

I kind of agree. However, there are boards that need a more complex
approach than just a simple phandle. They are nothing new.

So while it may be true that this specific encoder has never been used
in such a complex way, and there's a chance that it never will, my
comments are not really about this particular encoder.

What I want is a standard way to describe the video components. If we
have a standard complex one (video graphs) and a standard simple one
(simple phandles), it's ok for me.

>> The point of the ports/endpoint graph is to also support more
>> complicated scenarios.
> 
> But the scenario will always remain the same for this bridge. There will
> always be an input signal and a translation to some output signal along
> with some parameters to control how that translation happens. More
> complicated scenarios will likely need to be represented differently at
> a higher level than the bridge.

Yes, there's always one active input and one output for this bridge.
What the video graphs would bring is the possibility to have multiple
inputs and outputs, of which only a single one could be active at a
time. The different inputs and outputs could even require different
settings (say, the first output requires this bit set, but when using
the second output that bit must be cleared).

>> If you now create ps8622 bindings that do not
>> support those graphs, then no one else using ps8622 can use
>> ports/endpoint graphs either.
> 
> That's not true. The binding can easily be extended in a backwards-
> compatible way later on (should it really prove necessary).

I hope you are right =). As I said in my earlier mail, my experience so
far has been different.

 Tomi






Re: [PATCH V7 11/12] Documentation: bridge: Add documentation for ps8622 DT properties

2014-09-22 Thread Tomi Valkeinen
On 22/09/14 10:54, Thierry Reding wrote:

>> I wish all new display component bindings would use the video
>> ports/endpoints to describe the connections. It will be very difficult
>> to improve the display driver model later if we're missing such critical
>> pieces from the DT bindings.
> 
> I disagree. Why would we want to burden all devices with a bloated

I agree that the .dts becomes more bloated with video graph.

> binding and drivers with parsing a complex graph when it's not even

Hopefully not all drivers will need to do the parsing themselves. If
it's common stuff, I don't see why we can't have helpers to do the work.

> known that it will be necessary? Evidently this device works fine
> using the current binding. Just because there are bindings to describe

Well, I can write almost any kind of bindings, and then evidently my
device would work. For me, on my board.

> ports in a generic way doesn't mean it has to be applied everywhere.
> After all the concept of ports/endpoints applies to non-video devices
> too, yet we don't require the bindings for those devices to add ports
> or endpoints nodes.

I guess non-video devices haven't had need for those. I have had lots of
boards with video setup that cannot be represented with simple phandles.
I'm not sure if I have just been unlucky or what, but my understanding is
that other people have encountered such boards also. Usually the
problems encountered there have been circumvented with some hacky video
driver for that specific board, or maybe a static configuration handled
by the boot loader.

> Also it won't be very difficult to extend the binding in a backwards
> compatible way if that becomes necessary.

I don't know about that. More or less every case I've encountered where
a binding was not good from the start has been anything but "not very
difficult" to fix afterwards.

Do we have a standard way of representing the video pipeline with simple
phandles? Or does everyone just do their own version? If there's no
standard way, it sounds like it'll be a mess to support in the future.

If there's much objection to the bloatiness of video graphs, maybe we
could come up with a simpler alternative (like the phandles here), and
"standardize" it. Then, if the common display framework or some such
ever materializes, we could say that if your driver uses CDF, you need
to support these video graph bindings and these simpler bindings to be
compatible.

However, I do have a slight recollection that alternative
simplifications to the video graph were discussed at some point, and,
while I may remember wrong, I think it was not accepted. At least I had
one method to simplify the ports/endpoints, but that was removed from
the patches I had.

 Tomi






Re: [PATCH V7 11/12] Documentation: bridge: Add documentation for ps8622 DT properties

2014-09-19 Thread Tomi Valkeinen
On 19/09/14 16:59, Ajay kumar wrote:

> I am not really able to understand, what's stopping us from using this
> bridge on a board with "complex" display connections. To use ps8622 driver,
> one needs to "attach" it to the DRM framework. For this, the DRM driver

Remember that when we talk about DT bindings, there's no such thing as
DRM. We talk about hardware. The same bindings need to work on any
operating system.

> would need the DT node for ps8622 bridge. For which I use a phandle.

A complex one could be for example a case where you have two different
panels connected to ps8622, and you can switch between the two panels
with, say, a gpio. How do you present that with a simple phandle?

In the exynos5420-peach-pit.dts, which you linked earlier, I see a
"panel" property in the ps8625 node. That's missing from the bindings in
this patch. Why is that? How is the panel linked in this version?

> If some XYZ platform wishes to pick the DT node via a different method,
> they are always welcome to do it. Just because I am not specifying a
> video port/endpoint in the DT binding example, would it mean that platform
> cannot make use of ports in future? If that is the case, I can add something

All the platforms share the same bindings for ps8622. If you now specify
that ps8622 bindings use a simple phandle, then anyone who uses ps8622
should support that.

Of course the bindings can be extended in the future. In that case the
drivers need to support both the old and the new bindings, which is
always a hassle.

Generally speaking, I sense that we have different views of how display
devices and drivers are structured. You say "If some XYZ platform wishes
to pick the DT node via a different method, they are always welcome to
do it.". This sounds to me that you see the connections between display
devices as something handled by a platform specific driver.

I, on the other hand, see connections between display devices as common
properties.

Say, we could have a display board, with a panel and an encoder and
maybe some other components, which takes parallel RGB as input. The same
display board could as well be connected to an OMAP board or to an
Exynos board.

I think the exact same display-board.dtsi file, which describes the
devices and connections in the display board, should be usable on both
OMAP and Exynos platforms. This means we need to have a common way to
describe video devices, just as we have for other things.
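As a sketch of what I mean (the compatibles here are invented for the
example):

	/* display-board.dtsi */
	encoder {
		compatible = "vendor,rgb-to-lvds";
		#address-cells = <1>;
		#size-cells = <0>;

		port@0 {
			reg = <0>;
			enc_in: endpoint {
				/* remote-endpoint set in the board .dts,
				 * pointing to the SoC's RGB output */
			};
		};

		port@1 {
			reg = <1>;
			enc_out: endpoint {
				remote-endpoint = <&panel_in>;
			};
		};
	};

	panel {
		compatible = "vendor,lvds-panel";

		port {
			panel_in: endpoint {
				remote-endpoint = <&enc_out>;
			};
		};
	};

The board .dts would then only connect enc_in to the SoC side, and the
rest of the display board's description stays platform independent.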

 Tomi






Re: [PATCH V7 11/12] Documentation: bridge: Add documentation for ps8622 DT properties

2014-09-19 Thread Tomi Valkeinen
On 18/09/14 08:50, Ajay kumar wrote:

>>> Why do we need a complex graph when it can be handled using a simple 
>>> phandle?
>>
>> Maybe in your case you can handle it with a simple phandle. Can you
>> guarantee that it's enough for everyone, on all platforms?
> Yes, as of now exynos5420-peach-pit and exynos5250-spring boards use
> this. In case of both, the phandle to bridge node is passed to the
> exynos_dp node.
> 
>> The point of the ports/endpoint graph is to also support more
>> complicated scenarios. If you now create ps8622 bindings that do not
>> support those graphs, then no one else using ps8622 can use
>> ports/endpoint graphs either.
>>
>> Btw, is there an example of how the bridge with these bindings is used in a
>> board's .dts file? I couldn't find any example with a quick search. So
>> it's unclear to me what the "simple phandle" actually is.
> Please refer to the following link:
> https://git.kernel.org/cgit/linux/kernel/git/kgene/linux-samsung.git/tree/arch/arm/boot/dts/exynos5420-peach-pit.dts?id=samsung-dt#n129
> Let me know if you still think we would need to describe it as a complex 
> graph!

Yes, I think so. I'm not the DRM maintainer, though.

I think we have two options:

1) Describe the video component connections with the ports/endpoints
properly for all new display device bindings, and know that it's
(hopefully) future proof and covers even the more complex boards that
use the devices.

or

2) Use some simple methods to describe the links, like single phandle as
you do, knowing that we can't support more complex boards in the future.
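To make the difference between the two options concrete, sketches of
both (node and label names invented):

	/* option 2: simple phandle */
	dp {
		bridge = <&ps8622>;
	};

	/* option 1: ports/endpoints */
	dp {
		port {
			dp_out: endpoint {
				remote-endpoint = <&bridge_in>;
			};
		};
	};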

I see some exynos boards already using the ports/endpoints, like
arch/arm/boot/dts/exynos4412-trats2.dts. Why not use it for all new
display devices?

 Tomi






Re: [PATCH V7 11/12] Documentation: bridge: Add documentation for ps8622 DT properties

2014-09-17 Thread Tomi Valkeinen
On 17/09/14 17:29, Ajay kumar wrote:
> Hi Tomi,
> 
> Thanks for your comments.
> 
> On Wed, Sep 17, 2014 at 5:22 PM, Tomi Valkeinen  wrote:
>> On 27/08/14 17:39, Ajay Kumar wrote:
>>> Add documentation for DT properties supported by ps8622/ps8625
>>> eDP-LVDS converter.
>>>
>>> Signed-off-by: Ajay Kumar 
>>> ---
>>>  .../devicetree/bindings/video/bridge/ps8622.txt|   20 
>>>  1 file changed, 20 insertions(+)
>>>  create mode 100644 Documentation/devicetree/bindings/video/bridge/ps8622.txt
>>>
>>> diff --git a/Documentation/devicetree/bindings/video/bridge/ps8622.txt b/Documentation/devicetree/bindings/video/bridge/ps8622.txt
>>> new file mode 100644
>>> index 000..0ec8172
>>> --- /dev/null
>>> +++ b/Documentation/devicetree/bindings/video/bridge/ps8622.txt
>>> @@ -0,0 +1,20 @@
>>> +ps8622-bridge bindings
>>> +
>>> +Required properties:
>>> + - compatible: "parade,ps8622" or "parade,ps8625"
>>> + - reg: first i2c address of the bridge
>>> + - sleep-gpios: OF device-tree gpio specification for PD_ pin.
>>> + - reset-gpios: OF device-tree gpio specification for RST_ pin.
>>> +
>>> +Optional properties:
>>> + - lane-count: number of DP lanes to use
>>> + - use-external-pwm: backlight will be controlled by an external PWM
>>
>> What does this mean? That the backlight support from ps8625 is not used?
>> If so, maybe "disable-pwm" or something?
> "use-external-pwm" or "disable-bridge-pwm" would be better.

Well, the properties are about the bridge. "use-external-pwm" means that
the bridge uses an external PWM, which, if I understood correctly, is
not what the property is about.

"disable-bridge-pwm" is ok, but the "bridge" there is extra. The
properties are about the bridge, so it's implicit.

>>> +
>>> +Example:
>>> + lvds-bridge@48 {
>>> + compatible = "parade,ps8622";
>>> + reg = <0x48>;
>>> + sleep-gpios = <&gpc3 6 1 0 0>;
>>> + reset-gpios = <&gpc3 1 1 0 0>;
>>> + lane-count = <1>;
>>> + };
>>>
>>
>> I wish all new display component bindings would use the video
>> ports/endpoints to describe the connections. It will be very difficult
>> to improve the display driver model later if we're missing such critical
>> pieces from the DT bindings.
> Why do we need a complex graph when it can be handled using a simple phandle?

Maybe in your case you can handle it with a simple phandle. Can you
guarantee that it's enough for everyone, on all platforms?

The point of the ports/endpoint graph is to also support more
complicated scenarios. If you now create ps8622 bindings that do not
support those graphs, then no one else using ps8622 can use
ports/endpoint graphs either.

Btw, is there an example of how the bridge with these bindings is used in a
board's .dts file? I couldn't find any example with a quick search. So
it's unclear to me what the "simple phandle" actually is.

 Tomi






Re: [PATCH V7 11/12] Documentation: bridge: Add documentation for ps8622 DT properties

2014-09-17 Thread Tomi Valkeinen
On 27/08/14 17:39, Ajay Kumar wrote:
> Add documentation for DT properties supported by ps8622/ps8625
> eDP-LVDS converter.
> 
> Signed-off-by: Ajay Kumar 
> ---
>  .../devicetree/bindings/video/bridge/ps8622.txt|   20 
>  1 file changed, 20 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/video/bridge/ps8622.txt
> 
> diff --git a/Documentation/devicetree/bindings/video/bridge/ps8622.txt b/Documentation/devicetree/bindings/video/bridge/ps8622.txt
> new file mode 100644
> index 000..0ec8172
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/video/bridge/ps8622.txt
> @@ -0,0 +1,20 @@
> +ps8622-bridge bindings
> +
> +Required properties:
> + - compatible: "parade,ps8622" or "parade,ps8625"
> + - reg: first i2c address of the bridge
> + - sleep-gpios: OF device-tree gpio specification for PD_ pin.
> + - reset-gpios: OF device-tree gpio specification for RST_ pin.
> +
> +Optional properties:
> + - lane-count: number of DP lanes to use
> + - use-external-pwm: backlight will be controlled by an external PWM

What does this mean? That the backlight support from ps8625 is not used?
If so, maybe "disable-pwm" or something?

> +
> +Example:
> + lvds-bridge@48 {
> + compatible = "parade,ps8622";
> + reg = <0x48>;
> + sleep-gpios = <&gpc3 6 1 0 0>;
> + reset-gpios = <&gpc3 1 1 0 0>;
> + lane-count = <1>;
> + };
> 

I wish all new display component bindings would use the video
ports/endpoints to describe the connections. It will be very difficult
to improve the display driver model later if we're missing such critical
pieces from the DT bindings.

 Tomi






Re: [PATCH v6 10/14] drm/panel: add S6E3FA0 driver

2014-07-30 Thread Tomi Valkeinen
On 22/07/14 10:49, Thierry Reding wrote:

> But what I was trying to say is that if the Read IDs command isn't an
> official DCS command, maybe it would be a better idea to use the DDB
> instead. I assume that even if it isn't the same information it would
> at least be a superset and therefore a suitable replacement.

Only if DDB commands work on that panel =). Even if a panel supports
DCS, it doesn't mean it supports all the commands.

Also, does it really matter which one to use inside a panel driver? I
don't really see any pros or cons with either option. Except, of
course, if using one of them makes the driver's code simpler.

 Tomi






Re: [PATCH v6 10/14] drm/panel: add S6E3FA0 driver

2014-07-30 Thread Tomi Valkeinen
On 22/07/14 06:41, YoungJun Cho wrote:

> Yes, this command is related with Nokia.
> 
> Last Friday, I met panel vendor guy for some issues.
> At the break time, I asked him for about this Read IDs(04H), why it is
> not included in DCS specification.
> He said that this command was originated by Nokia.
> In feature phone times, most of panel vendors wanted their panels to be
> used by Nokia and Nokia wanted this command, so most of panel vendors
> still provide this command traditionally.

This is my understanding also. I think the whole MIPI DCS spec
originated from Nokia's command set.

 Tomi






Re: [PATCH 06/17] video: fbdev: s3c-fb: remove s5p64x0 related fimd codes

2014-06-30 Thread Tomi Valkeinen
On 01/07/14 00:32, Kukjin Kim wrote:
> This patch removes fimd codes for s5p6440 and s5p6450 SoCs.
> 
> Signed-off-by: Kukjin Kim 
> Cc: Tomi Valkeinen 
> ---
>  .../devicetree/bindings/video/samsung-fimd.txt |1 -
>  drivers/video/fbdev/Kconfig|2 +-
>  drivers/video/fbdev/s3c-fb.c   |   30 
>  3 files changed, 1 insertion(+), 32 deletions(-)

Shall I queue this and patch 15/17 via fbdev tree, or are there
dependencies and the whole series should go as one?

If the latter, for this and 15/17:

Acked-by: Tomi Valkeinen 

 Tomi






Re: [PATCH] video: fbdev: s3c2410fb: Move to clk_prepare_enable/clk_disable_unprepare

2014-06-30 Thread Tomi Valkeinen
On 30/06/14 22:14, Vasily Khoruzhick wrote:
> Use clk_prepare_enable/clk_disable_unprepare to make the driver
> work properly with common clock framework.
> 
> Signed-off-by: Vasily Khoruzhick 
> ---
>  drivers/video/fbdev/s3c2410fb.c | 10 +-
>  1 file changed, 5 insertions(+), 5 deletions(-)

Thanks, queued for 3.17.

 Tomi






Re: [PATCH] video: exynos: Add a dependency to the menu

2014-05-08 Thread Tomi Valkeinen
On 08/04/14 13:00, Jean Delvare wrote:
> All drivers under menu EXYNOS_VIDEO depend on either ARCH_S5PV210 or
> ARCH_EXYNOS, so add these as dependencies to the menu itself. This
> avoids presenting an empty and useless menu on other architectures.
> 
> Then drivers under the menu only need a dependency if they depend on
> one of the supported architectures specifically.
> 
> Signed-off-by: Jean Delvare 
> Cc: Jean-Christophe Plagniol-Villard 
> Cc: Tomi Valkeinen 
> Cc: Kukjin Kim 
> ---
>  drivers/video/exynos/Kconfig |2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> --- linux-3.15-rc0.orig/drivers/video/exynos/Kconfig  2014-04-07 10:23:07.316226676 +0200
> +++ linux-3.15-rc0/drivers/video/exynos/Kconfig   2014-04-08 11:50:17.406723187 +0200
> @@ -4,6 +4,7 @@
>  
>  menuconfig EXYNOS_VIDEO
>   bool "Exynos Video driver support"
> + depends on ARCH_S5PV210 || ARCH_EXYNOS
>   help
> This enables support for EXYNOS Video device.
>  
> @@ -15,7 +16,6 @@ if EXYNOS_VIDEO
>  
>  config EXYNOS_MIPI_DSI
>   bool "EXYNOS MIPI DSI driver support."
> - depends on ARCH_S5PV210 || ARCH_EXYNOS
>   select GENERIC_PHY
>   help
> This enables support for MIPI-DSI device.
> 
> 

I've applied this for 3.16.

 Tomi






Re: [RFC PATCH v2 19/21] ARM: dts: exynos5250-arndale: add dsi and panel nodes

2014-03-07 Thread Tomi Valkeinen
On 07/03/14 16:17, Andrzej Hajda wrote:
> On 03/07/2014 02:28 PM, Tomi Valkeinen wrote:
> (...)
>> There are many possible connections from FIMD, some of them:
>> FIMD ---> RGB panel, external
>> FIMD ---> DSI, on SoC
>> FIMD ---> eDP, on SoC
>> FIMD ---> ImageEnhacer, on SoC
>> This sounds similar to OMAP, at least roughly.
>>
>>> In the first case port should be created.
>>> In other cases connection could be determined by presence/absence
>>> of specific nodes, so in fact the port can be optional, almost like in
>>> my proposal :)
>> Well, I think not.
>>
>> In the external encoder case, the ports are there, and they are used.
>> You just didn't specify them, and thus make the driver deduce them from
>> the DT.
>>
>> In the FIMD case, if the RGB port is needed, you need to specify it
>> in the DT data, and it's used. If you only need, say, DSI, the RGB port
>> is not used and thus it doesn't need to be present in the DT data.
>>
>> It's fine to leave the port definition out if it is not used at all.
> On Exynos, DSI is in fact RGB/DSI encoder (or I80/DSI). DSI and RGB pins
> are connected to the same FIMD output. So from FIMD point of view
> RGB port is used in both cases.

I guessed as much, as it's the same on OMAP.

But I think it's still fine to have a port only for the RGB panel. For
the RGB panel, you have pins on the SoC to which the RGB data goes. The
RGB port in DT represents these pins. You don't have such pins for the
FIMD->DSI case.

But as I said, it's fine to use ports for internal connections also if
it works for you.

I still don't like the idea of having the port as optional in DT for
cases where the port comes from the parent device. Sure, it removes some
lines from the DT, but I think it makes it more confusing. Especially as
I think it's a very rare case where you could use that optional format,
as usually you will have some properties in the endpoint node.

 Tomi






Re: [RFC PATCH v2 19/21] ARM: dts: exynos5250-arndale: add dsi and panel nodes

2014-03-07 Thread Tomi Valkeinen
On 07/03/14 15:07, Andrzej Hajda wrote:
> On 03/07/2014 01:32 PM, Tomi Valkeinen wrote:
>> On 07/03/14 14:22, Andrzej Hajda wrote:
>>
>>> I think we should even extend the bindings to fimd:
>>> dsi {
>>> port@0 {
>>> dsi_0: endpoint {
>>> remote-endpoint=<&fimd_0>;
>>> }
>>> }
>>> port@1 {
>>> dsi_1: endpoint {
>>> remote-endpoint=<&lvds_0>;
>>> }
>>> }
>>> }
>>>
>>> fimd {
>>> port@0 {
>>> fimd_0: endpoint {
>>> remote-endpoint=<&dsi_0>;
>>> }
>>> }
>>> }
>> If both fimd and dsi are SoC components, I don't see any strict need for
>> that. I think the ports/endpoints are only important when dealing with
>> external components, which can be used on any platform. For SoC internal
>> components you can have relevant data directly in the drivers, as it is
>> fixed (for that SoC).
>>
>> Of course, if using ports for SoC internal components makes things
>> easier for you, I don't see any problems with it either.
> There are many possible connections from FIMD, some of them:
> FIMD ---> RGB panel, external
> FIMD ---> DSI, on SoC
> FIMD ---> eDP, on SoC
> FIMD ---> ImageEnhacer, on SoC

This sounds similar to OMAP, at least roughly.

> In the first case port should be created.
> In other cases connection could be determined by presence/absence
> of specific nodes, so in fact the port can be optional, almost like in
> my proposal :)

Well, I think not.

In the external encoder case, the ports are there, and they are used.
You just didn't specify them, and thus make the driver deduce them from
the DT.

In the FIMD case, if the RGB port is needed, you need to specify it
in the DT data, and it's used. If you only need, say, DSI, the RGB port
is not used and thus it doesn't need to be present in the DT data.

It's fine to leave the port definition out if it is not used at all.

>> For OMAP, the SoC's display blocks are all inside one bigger DSS
>> "container", so I have not seen need to represent the connections
>> between the internal components in the DT data.
> How do you deal with situation when IPs in SoC can be connected in
> different ways ?

Basically so that (using exynos terms) if, say, a DSI panel is to be
enabled, the DSI panel driver will reserve the DSI master for itself,
and the DSI master will reserve the FIMD for itself, presuming FIMD has
not already been reserved. When the DSI panel is disabled, FIMD is freed.

 Tomi






Re: [RFC PATCH v2 19/21] ARM: dts: exynos5250-arndale: add dsi and panel nodes

2014-03-07 Thread Tomi Valkeinen
On 07/03/14 14:22, Andrzej Hajda wrote:

> I think we should even extend the bindings to fimd:
> dsi {
> port@0 {
> dsi_0: endpoint {
> remote-endpoint=<&fimd_0>;
> }
> }
> port@1 {
> dsi_1: endpoint {
> remote-endpoint=<&lvds_0>;
> }
> }
> }
> 
> fimd {
> port@0 {
> fimd_0: endpoint {
> remote-endpoint=<&dsi_0>;
> }
> }
> }

If both fimd and dsi are SoC components, I don't see any strict need for
that. I think the ports/endpoints are only important when dealing with
external components, which can be used on any platform. For SoC internal
components you can have relevant data directly in the drivers, as it is
fixed (for that SoC).

Of course, if using ports for SoC internal components makes things
easier for you, I don't see any problems with it either.

For OMAP, the SoC's display blocks are all inside one bigger DSS
"container", so I have not seen need to represent the connections
between the internal components in the DT data.

If the display components were truly independent IPs on the SoC, then
using ports might make things easier.

 Tomi






Re: [RFC PATCH v2 19/21] ARM: dts: exynos5250-arndale: add dsi and panel nodes

2014-03-04 Thread Tomi Valkeinen
On 04/03/14 14:00, Andrzej Hajda wrote:

> I have made video path binding optional, in case of video bus
> if the specific video path is not present driver uses the bus it is
> connected to.
> In case DSI panel is controlled via different bus the path should be
> specified
> explicitly.
> 
> I have no strong feelings against making this binding required but as
> you have
> stated above "in this case it's obvious" and for me quite redundant.

Yes, it's a good point. I didn't realize they were optional, I thought
they were not defined at all. I hadn't even thought that they could be
optional, I guess because in my uses there's always a reason to have
them. Even for a simple DSI command mode panel, I have information in
the DSI master's endpoint:

dsi {
	...

	dsi1_out_ep: endpoint {
		remote-endpoint = <&lcd0_in>;
		lanes = <0 1 2 3 4 5>;
	};
};

which forces the panel to also have an endpoint.
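On the panel side the matching endpoint would then look roughly like
this (a sketch):

	lcd0: panel {
		...

		port {
			lcd0_in: endpoint {
				remote-endpoint = <&dsi1_out_ep>;
			};
		};
	};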

And, actually, I think in your case also you would need an endpoint for
the dsi master, if you wanted to support multiple endpoints. You
currently have, for example, PLL clock freq as the DSI master's
property, but it should really be a property of the DSI master's endpoint.

I.e. if you have two DSI panels connected to the same DSI master, in a
way that only one can be enabled at a time (there would probably be a
mux to select where the DSI lanes go, we have that on some development
boards), you could well need different clock frequencies for the panels.
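So roughly something like this (a sketch only; I have just moved the
existing PLL clock property into the endpoints, and the frequencies are
example values):

	dsi {
		port {
			#address-cells = <1>;
			#size-cells = <0>;

			endpoint@0 {
				reg = <0>;
				remote-endpoint = <&panel0_in>;
				samsung,pll-clock-frequency = <24000000>;
			};

			endpoint@1 {
				reg = <1>;
				remote-endpoint = <&panel1_in>;
				samsung,pll-clock-frequency = <500000000>;
			};
		};
	};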

> What is the gain in specifying explicitly video paths in such cases?

- Until we (everyone interested in this) have thought about it fully and
agreed that video paths in cases like this are optional, you would be
safer to specify them explicitly. That way your bindings are valid in
either case.

- Uniformity. If in most cases we need to specify the video paths,
either because the control path is different than the data path, or
because there is need for endpoint specific information, having video
paths as optional for some specific cases may needlessly complicate the
code.

 Tomi






Re: [RFC PATCH v2 19/21] ARM: dts: exynos5250-arndale: add dsi and panel nodes

2014-02-28 Thread Tomi Valkeinen
On 12/02/14 13:31, Andrzej Hajda wrote:
> The patch adds bridge and panel nodes.
> It adds also DSI properties specific for arndale board.
> 
> Signed-off-by: Andrzej Hajda 
> ---
>  arch/arm/boot/dts/exynos5250-arndale.dts | 39 
>  1 file changed, 39 insertions(+)
> 
> diff --git a/arch/arm/boot/dts/exynos5250-arndale.dts b/arch/arm/boot/dts/exynos5250-arndale.dts
> index a0a985d..7d666b1 100644
> --- a/arch/arm/boot/dts/exynos5250-arndale.dts
> +++ b/arch/arm/boot/dts/exynos5250-arndale.dts
> @@ -584,6 +584,45 @@
>   };
>   };
>  
> + panel: panel {
> + compatible = "boe,hv070wsa-100";
> + power-supply = <&vcc_3v3_reg>;
> + enable-gpios = <&gpd1 3 0>;
> + port {
> + panel_ep: endpoint {
> + remote-endpoint = <&bridge_out_ep>;
> + };
> + };
> + };
> +
> + dsi_0: dsi@14500000 {
> + vddcore-supply = <&ldo8_reg>;
> + vddio-supply = <&ldo10_reg>;
> + samsung,pll-clock-frequency = <24000000>;
> + samsung,burst-clock-frequency = <320000000>;
> + samsung,esc-clock-frequency = <10000000>;
> + status = "okay";
> +
> + bridge@0 {
> + reg = <0>;
> + compatible = "toshiba,tc358764";
> + vddc-supply = <&vcc_1v2_reg>;
> + vddio-supply = <&vcc_1v8_reg>;
> + vddmipi-supply = <&vcc_1v2_reg>;
> + vddlvds133-supply = <&vcc_3v3_reg>;
> + vddlvds112-supply = <&vcc_1v2_reg>;
> + reset-gpio = <&gpd1 6 1>;
> + #address-cells = <1>;
> + #size-cells = <0>;
> + port@1 {
> + reg = <1>;
> + bridge_out_ep: endpoint {
> + remote-endpoint = <&panel_ep>;
> + };
> + };
> + };
> + };

Compared to what I've done on OMAP, you don't seem to specify the video
inputs for the tc358764 at all. In this case it's obvious, as the chip
is a child of the DSI master. But the chip could as well be controlled
via i2c, and so be placed as a child of the i2c nodes.

So even if the driver doesn't use it, maybe it would be more future
proof to have both input and output endpoints for the tc358764?

 Tomi






Re: [RFC PATCH v2 19/21] ARM: dts: exynos5250-arndale: add dsi and panel nodes

2014-02-28 Thread Tomi Valkeinen
On 28/02/14 15:31, Tomi Valkeinen wrote:

> Compared to what I've done on OMAP, you don't seem to specify the video
> inputs for the tc358764 at all. In this case it's obvious, as the chip
> is a child of the DSI master. But the chip could as well be controlled
> via i2c, and so be placed as a child of the i2c nodes.
> 
> So even if the driver doesn't use it, maybe it would be more future
> proof to have both input and output endpoints for the tc358764?

Oh, and one addition: the way Laurent and I see the DSI case (and other
similar ones), the child/parent relationship gives the control bus path,
and the video ports give the video data path.

So both are always needed. A DSI panel may be controlled via DSI, i2c,
spi, but the video path will always go from DSI master to the panel.

Or, as a theoretical panel, you could have a DSI controlled panel, being
a child of the DSI master, but the video data would come via, say,
parallel RGB. You can actually do that with some panels/encoders, even
if the concept is silly.

 Tomi






Re: [RFC PATCH v2 14/21] ARM: dts: exynos4412-trats2: add panel node

2014-02-28 Thread Tomi Valkeinen
On 12/02/14 13:31, Andrzej Hajda wrote:
> The patch adds s6e8aa0 panel node for trats2.
> It adds also trats2 specific properties for DSI
> and regulator required by panel.
> 
> Signed-off-by: Andrzej Hajda 
> ---
>  arch/arm/boot/dts/exynos4412-trats2.dts | 47 +
>  1 file changed, 47 insertions(+)
> 
> diff --git a/arch/arm/boot/dts/exynos4412-trats2.dts b/arch/arm/boot/dts/exynos4412-trats2.dts
> index fb7b9ae..0986d08 100644
> --- a/arch/arm/boot/dts/exynos4412-trats2.dts
> +++ b/arch/arm/boot/dts/exynos4412-trats2.dts
> @@ -442,6 +442,15 @@
>   };
>   };
>  
> + lcd_vdd3_reg: voltage-regulator@1 {
> + compatible = "regulator-fixed";
> + regulator-name = "LCD_VDD_2.2V";
> + regulator-min-microvolt = <2200000>;
> + regulator-max-microvolt = <2200000>;
> + gpio = <&gpc0 1 0>;
> + enable-active-high;
> + };
> +
>   sdhci@12510000 {
>   bus-width = <8>;
>   non-removable;
> @@ -498,6 +507,44 @@
>   };
>   };
>  
> + dsi_0: dsi@11C80000 {
> + vddcore-supply = <&ldo8_reg>;
> + vddio-supply = <&ldo10_reg>;
> + samsung,pll-clock-frequency = <24000000>;
> + samsung,burst-clock-frequency = <500000000>;
> + samsung,esc-clock-frequency = <20000000>;
> + status = "okay";
> +
> + panel@0 {
> + compatible = "samsung,s6e8aa0";
> + reg = <0>;
> + vdd3-supply = <&lcd_vdd3_reg>;
> + vci-supply = <&ldo25_reg>;
> + reset-gpio = <&gpy4 5 0>;
> + power-on-delay = <50>;
> + reset-delay = <100>;
> + init-delay = <100>;
> + flip-horizontal;
> + flip-vertical;
> + panel-width-mm = <58>;
> + panel-height-mm = <103>;

I have the same comment here as for the bridge chip: I would specify the
video ports/endpoints between the DSI master and the panel, even if you
don't use them at the moment.

 Tomi






Re: [PATCH] video: exynos_mipi_dsim: Remove unused variable

2013-11-26 Thread Tomi Valkeinen
On 2013-11-15 04:52, Sachin Kamat wrote:
> + Tomi
> 
> Hi Olof,
> 
> On 15 November 2013 02:39, Olof Johansson  wrote:
>> commit 7e0be9f9f7cba3356f75b86737dbe3a005da067e ('video: exynos_mipi_dsim:
>> Use the generic PHY driver') resulted in a warning about an unused
>> variable:
>>
>> drivers/video/exynos/exynos_mipi_dsi.c:144:26: warning: unused variable
>> 'pdev' [-Wunused-variable]
>>
>> It is indeed unused; remove it.
>>
>> Signed-off-by: Olof Johansson 
>> Cc: Sylwester Nawrocki 
>> ---
> 
> I had already sent a similar patch to fix this issue [1] which is
> reviewed by Kishon.
> But the patch that caused the warning was in Greg's tree at that time
> and he wanted
> the follow up patch to go through the video tree. I have pinged Tomi yesterday
> regarding this (now that his tree as well as the original patches are merged).
> 
> [1] http://www.spinics.net/lists/linux-fbdev/msg12755.html
> 

The one from Olof seems to have been merged, so I'll drop the one from
Sachin in my tree.

 Tomi






Re: [PATCH V5 4/5] video: exynos_mipi_dsim: Use the generic PHY driver

2013-10-09 Thread Tomi Valkeinen
On 28/09/13 22:27, Sylwester Nawrocki wrote:
> Use the generic PHY API instead of the platform callback
> for the MIPI DSIM DPHY enable/reset control.
> 
> Signed-off-by: Sylwester Nawrocki 
> Signed-off-by: Kyungmin Park 
> Acked-by: Felipe Balbi 
> Acked-by: Donghwa Lee 
> ---
> Changes since v4:
>  - PHY label removed from the platform data structure.
> ---
>  drivers/video/exynos/Kconfig   |1 +
>  drivers/video/exynos/exynos_mipi_dsi.c |   19 ++-
>  include/video/exynos_mipi_dsim.h   |5 ++---
>  3 files changed, 13 insertions(+), 12 deletions(-)

Acked-by: Tomi Valkeinen 

 Tomi






Re: [PATCH] video: exynos: Ensure definitions match prototypes

2013-08-30 Thread Tomi Valkeinen
On 02/07/13 14:26, Mark Brown wrote:
> From: Mark Brown 
> 
> Ensure that the definitions of functions match the prototypes used by
> other modules by including the header with the prototypes in the files
> with the definitions.
> 
> Signed-off-by: Mark Brown 

Thanks, queued this for 3.12.

 Tomi






Re: [PATCH 12/30] video/exynos: remove unnecessary header inclusions

2013-04-11 Thread Tomi Valkeinen
Hi,

On 2013-04-11 03:04, Arnd Bergmann wrote:
> In multiplatform configurations, we cannot include headers
> provided by only the exynos platform. Fortunately a number
> of drivers that include those headers do not actually need
> them, so we can just remove the inclusions.
> 
> Signed-off-by: Arnd Bergmann 
> Cc: linux-fb...@vger.kernel.org
> Cc: Jingoo Han 
> ---
>  drivers/video/exynos/exynos_mipi_dsi.c  | 2 --
>  drivers/video/exynos/exynos_mipi_dsi_common.c   | 2 --
>  drivers/video/exynos/exynos_mipi_dsi_lowlevel.c | 2 --
>  3 files changed, 6 deletions(-)

I've applied this and the next patch ([PATCH 13/30] video/s3c: move
platform_data out of arch/arm) to fbdev.

I also fixed the extra space mentioned by Jingoo.

 Tomi






Re: [RFC PATCH 0/4] Common Display Framework-TF

2013-02-11 Thread Tomi Valkeinen
On 2013-02-11 11:31, Marcus Lorentzon wrote:

> Ok, so what about a compromise which I think would work for "both" HWs
> ... we allow the "configure" operation during video on, then each HW
> driver could decide if this means you have to stop or not to do the
> changes required. But then it is also important that we keep all config
> in one struct and not split it up. Otherwise HW like ours that has to be
> stopped could need to stop once for each setting/operation called.
> So in short, allow configure(bus_params) during video on, keep all
> bus_params in the struct. It is in kernel struct so it can easily be
> changed/refactored.

Right, we need some way to group the configuration parameters. Either
one big struct, or something like start_config() and end_config(). I
think we could also consider which parameters make no sense to be
configured on the fly, and possibly have those separately. Although it
may be a bit difficult to think of all the (weird) use cases.

> Again, this probing and bus muxing is platform/bus specific and not
> panel specific. Any panel of that type will only ever work on Omap (or

The parameters used for the configuration itself are platform specific,
and that's why they need to be defined in the DT data. But the API
itself is not platform specific. The parameters are information about
how the panel is connected, just like, say, the number of data lines is
for DPI. Which is also something I think should be configured by the
panel.

> someone else implementing the same muxing features) as I see it. So why

No, it works for everyone. Well, at least I still don't see anything
omap or platform specific in the API. Of course, if the platform/device
doesn't support modifying the pin functions, then the function does nothing.

> not just put that config on the bus master, dispc? I still can't see how
> this is panel config. You are only pushing CDF API and meta data to
> describe something that is only needed by one bus master. I have never

The information about how the panel is connected (the wiring) has to be
somewhere in the DT data. We could have the info in the DSI master or in
the DSI slave. Or, in some platforms where the DSS is not involved in
the muxing/config, the info could be totally separate, in the boards
pinmuxing data.

I think all those work ok for normal cases without hotplug. But if you
have hotplug, then you need separate pinmuxing/config data for each panel.

You could possibly have a list of panels in your DSI master node,
containing the muxing data, but that sounds rather hacky, and also very
hardcoded, i.e. you need to know each panel that is ever going to be
connected.

If, on the other hand, you have the info in the panel node, it "just
works". When a new panel is connected, the relevant panel DT data comes
from somewhere (it's not relevant where), and it tells the DSI master
how it's connected.

Something like this is probably needed for the BeagleBone capes, if you
have happened to follow the discussion. Although it could be argued that
the capes may perhaps not be runtime hotswappable, and thus the
configuration can be done only once during boot after the cape has been
probed. But I'd rather not design the API so that we prevent hot swapping.

> seen any DSI slave that can change their pin config. And since there is

Well, if omap is the only SoC/device out there that supports configuring
the pin functions, and everybody is against the API, I'm not going to
press it.

But then again, I think similar configuration support may be needed even
for the normal pinmuxing, even in the case where you can't reconfigure
the DSI pin functions. You still need to mux the pins (perhaps, say,
between DSI and a GPIO), depending on how many lanes the panel uses.

In fact, speaking about all pins in general, I'm not very fond of having
a static pinmuxing in the board DT data, handled by the board setup
code. I think generally the pins should be muxed to safe-mode by the
board setup code, and then configured to their proper function by the
driver when it is initializing.

> no generic hot plug detect of DSI panels, at least not before DSI bus is
> available, I have to assume this probing it very platform specific. We
> have some products that provide 1-2 gpios to specify what panel is
> available, some use I2C sensor probing to then assume the panel plugged.

Yes, the hotplug mechanism is platform/board specific. But that's not
relevant here.

> At least in the first step I don't think this type of hot plug should be
> in the API. Then once the base panel driver is in we could discuss
> different solutions for you hot plug scenario.

Yes, as I said, I also think we shouldn't aim for hotplug in the v1. But
we also shouldn't prevent it.

 Tomi






Re: [RFC PATCH 0/4] Common Display Framework-TF

2013-02-11 Thread Tomi Valkeinen
On 2013-02-08 16:54, Marcus Lorentzon wrote:
> On 02/08/2013 03:02 PM, Tomi Valkeinen wrote:
>> On 2013-02-08 15:28, Marcus Lorentzon wrote:
>>
>>> When we do that we stop->setup->start during blanking. So our "DSS" is
>>> optimized to be able to do that without getting blocked. All DSI video
>>> mode panels (and DPI) products we have done so far have not had any
>>> issue with that (as long as DSI HS clock is set to continuous). I think
>>> this approach is less platform dependant, as long as there is no SoC
>>> that take more than a blanking period to reconfigure.
>> So do you stop, setup and start the link with CPU, and this has to
>> happen during blanking? Isn't that prone to errors? Or did you mean that
>> the hardware handles that automatically?
>>
>> In OMAP DSS there are so called shadow registers, that can be programmed
>> at any time. Then you set a bit (GO bit), which tells the hardware to
>> take the new settings into use at the next vblank.
>>
>>  From DSI driver's perspective the link is never stopped when
>> reconfiguring the video timings. However, many other settings have to be
>> configured when the link is disabled.
> 
> Yeah, you lucky guys with the GO bit ;). No, we actually do CPU
> stop,setup,start. But since it is video mode, master is driving the sync
> so it is not a hard deadline. It is enough to restart before pixels
> start to degrade. On an LCD that is not so much time, but on an OLED it
> could be 10 secs :). Anyway, we have had several mass products with this
> soft solution and it has worked well.

Ah, ok. But in that case what you said in an earlier mail is not quite
correct: "I think this approach is less platform dependant, as long as
there is no SoC that take more than a blanking period to reconfigure.".
So in your approach the reconfiguration doesn't have to be done inside
the blanking period, but before the panel's picture starts to fade?

I don't know... It's early Monday morning, and not enough coffee yet,
but I get a bit of an uneasy feeling if I think that your method of
reconfiguring would be the only thing allowed by the CDF API.

Some SoCs do support reconfiguring on the fly, without disabling the
link. It would not be nice if we didn't allow this to happen. And
actually, we're not only talking about SoCs here, but about any display
device, like external buffer chips etc. They may also offer means to
change configs on the fly.

Well, I don't have any hard opinion about this at the moment, but I
think we should think about different approaches to how the
configuration can be done.

> I understand, but removing the omap prefix doesn't mean it has to go in
> the panel slave port/bus settings. I still can't see why this should be
> configuration on the panel driver and not the DSI master driver. Number
> of pins might be useful since you might start with one lane and then
> activate the rest. But partial muxing (pre pinmux) doesn't seem to be
> something the panel should control or know anything about. Sounds like
> normal platform/DT data per product/board.

I think one case where this kind of pin configuration is needed, and
which also dictates that all panel related configuration has to be in
the panel's data, not in the DSI master's data, is hotplug.

If you have a board that has two panels connected to the same video
output, probably via some kind of mux, at least for the sensitive pins
like DSI, only one of the panels can be enabled at a time. The panels
can have different wiring, and thus the panel driver would need to
configure everything related to the bus when it's starting up.

The same also happens if you have a true hotplug, i.e. you can remove
the panel totally and plug in a new one. Again the wiring can be
different, and needs to be set up.

And, as I said, this means that all relevant data about the video bus
has to be in the panel's data, so that each panel can have its own set
of, say, pin configuration.

Hotplug is not a use case we should aim to support for the CDF v1, but I
think we should strive not to prevent hotplug either. So if we can
design the API so that hotplug is possible, I say let's do that.

 Tomi






Re: [RFC PATCH 0/4] Common Display Framework-TF

2013-02-08 Thread Tomi Valkeinen
On 2013-02-08 15:28, Marcus Lorentzon wrote:

> When we do that we stop->setup->start during blanking. So our "DSS" is
> optimized to be able to do that without getting blocked. All DSI video
> mode panels (and DPI) products we have done so far have not had any
> issue with that (as long as DSI HS clock is set to continuous). I think
> this approach is less platform dependant, as long as there is no SoC
> that take more than a blanking period to reconfigure.

So do you stop, setup and start the link with CPU, and this has to
happen during blanking? Isn't that prone to errors? Or did you mean that
the hardware handles that automatically?

In OMAP DSS there are so called shadow registers, that can be programmed
at any time. Then you set a bit (GO bit), which tells the hardware to
take the new settings into use at the next vblank.

From DSI driver's perspective the link is never stopped when
reconfiguring the video timings. However, many other settings have to be
configured when the link is disabled.

 In OMAP you can configure the DSI pins quite freely. We have the
 following struct:

 struct omap_dsi_pin_config {
  int num_pins;
  /*
   * pin numbers in the following order:
   * clk+, clk-
   * data1+, data1-
   * data2+, data2-
   * ...
   */
  int pins[OMAP_DSS_MAX_DSI_PINS];
 };


> I think it still is OMAP specifics and doesn't belong in the panel
> drivers any longer. If you revisit this requirement in the CDF context
> where DSI ifc parameters should describe how to interface with a panel
> outside the SoC, there can't really be any dependencies on SoC internal
> routing. As you say, this is inside pinmux, so how can that affect the
> SoC external interface? I would suggest moving this to dispc-dsilink DT
> settings that are activated on dsilink->enable/disable. At least that is
> how I plan to solve similar STE SoC specific DSI config settings that
> are not really CDF panel generic, like some DPhy trim settings. They do
> depend on the panel and clock speed, but they are more product specific
> than panel driver specific. Then if there are these type of settings
> that every SoC have, then we could look at standardize those. But for
> starters I would try to keep it in product/board-DT per DSI link. So we
> should try to differentiate between DSI host and slave bus params and
> keep slave params in panel driver.

Ok, I think I was being a bit vague here. I explained the OMAP DSI
routing not because I meant that this API is specific to that, but
because it explains why this kind of routing info is needed, and a
bitmask is not enough.

If you look at the omap_dsi_pin_config struct, there's nothing OMAP
specific there (except the names =). All it tells is that this device
uses N DSI pins, and the device's clk+ function should be connected to
pin X on the DSI master, clk- should be connected to pin Y, etc. X and Y
are integers, and what they mean is specific to the DSI master.

When the DSI master is OMAP's DSI, the OMAP DSI driver does the pin
configuration as I explained. When the DSI master is something else,
say, a DSI bridge, it does whatever it needs to do (which could be
nothing) to assign a particular DSI function to a pin.
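In the DT data the same information could be expressed roughly like the
"lanes" endpoint property in my OMAP examples (a sketch, with lcd_in as
a made-up panel-side endpoint label):

	dsi1_out_ep: endpoint {
		remote-endpoint = <&lcd_in>;
		/* clk+, clk-, data1+, data1-, data2+, data2- */
		lanes = <0 1 2 3 4 5>;
	};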

 Tomi






Re: [RFC PATCH 0/4] Common Display Framework-TF

2013-02-08 Thread Tomi Valkeinen
On 2013-02-08 14:40, Marcus Lorentzon wrote:

> I agree, but I think it should be
> setup->enable->video_on->video_off->disable->setup->...
> I don't think there is any interface parameters that should be changed
> while link is enabled. And if there are, they should be identified and
> split out into a separate operation.

Hmm. At least on OMAP, in DSI video mode it is possible to change at
least the timings on the fly. But yes, generally the link has to be
disabled first before changing any parameters.

>> In OMAP you can configure the DSI pins quite freely. We have the
>> following struct:
>>
>> struct omap_dsi_pin_config {
>> int num_pins;
>> /*
>>  * pin numbers in the following order:
>>  * clk+, clk-
>>  * data1+, data1-
>>  * data2+, data2-
>>  * ...
>>  */
>> int pins[OMAP_DSS_MAX_DSI_PINS];
>> };
>>
> Do you reroute after boot? Or is this just "board/product setup". We
> have some pinmuxing as well and DPhy sharing, but we have never used it
> after init/boot. If not runtime, I think this could be put in DT config
> for product instead of under dynamic control from panel.

The pin config with the struct above is done later, when the panel
driver configures the DSI bus. So in OMAP we have two distinct things
that need to be configured:

- The actual pin muxing, i.e. selecting the functions for pin N on the
OMAP package. The functions for a single pin could be for example GPIO
70, DSI DX0, UART1_CTS, etc. This is normally done once during board init.

- DSI pin configuration in the display subsystem. This is used to map
the pins to DSI functions. I.e. DSI DX0 pin is mapped to DATA1+. This is
done by the DSS driver, when the panel driver gives it the parameters.

So the first muxing basically assigns the pin to DSI in general, and
then DSI will internally route the pin to an actual DSI function.

Whether the muxing needs to be changed during runtime... I'm not sure. On
OMAP the DSI pin config also tells how many lanes are used. So if a DSI
panel would first want to use only one lane, and later change it to n
lanes, we'd need this kind of function.

I think it conceptually fits better if the pin config data is passed to
the panel via DT data, and the panel then gives the config to the DSI
master. It's just a part of the DSI bus parameters, like, say, clock
speed or whether to use HS or LP. That way the DT node for the panel
contains the information about the panel. (versus having pin config data
in the DSI master, in which case the DSI master's node contains data
about a specific DSI panel).

 Tomi






Re: [RFC PATCH 0/4] Common Display Framework-TF

2013-02-08 Thread Tomi Valkeinen
Hi,

On 2013-02-03 21:17, Tomasz Figa wrote:
> Hi Laurent,
> 
> On Saturday 02 of February 2013 11:39:59 Laurent Pinchart wrote:
>> Hi Tomasz,
>>
>> Thank you for your RFC.
>>
>> On Wednesday 30 January 2013 16:38:59 Tomasz Figa wrote:

>>> Changes done to Tomi's edition of CDF:
>>>  - Replaced set_operation_mode, set_pixel_format and set_size video
>>>  source>  
>>>operations with get_params display entity operation, as it was
>>>originally in Laurent's version.
>>>
>>>We have discussed this matter on the mailing list and decided that
>>>it
>>>would be better to have the source ask the sink for its parameter
>>>structure and do whatever appropriate with it.
>>
>> Could you briefly outline the rationale behind this ?
> 
> Display interfaces may be described by an arbitrary number of parameters. 
> Some will require just video timings, others like DSI will require a 
> significant number of additional bus-specific ones.
> 
> Separating all the parameters (all of which are usually set at the same 
> time) into a lot of ops setting single parameter will just add unnecessary 
> complexity.

I have nothing against combining the parameters that way. I think the
important thing here is just that we have to allow changing the
parameters whenever the display device needs that. So the bus
parameters cannot be a one-time constant setting.
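
To illustrate, the combined op could be as simple as this (the names
here are hypothetical, just to show the direction of the call):

struct display_entity_control_ops {
	/*
	 * The source asks the sink for all of its bus parameters at
	 * once. Must be callable again whenever the sink's needs
	 * change, not just once at setup time.
	 */
	int (*get_params)(struct display_entity *ent,
			  struct display_entity_params *params);
};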

>> I'm wondering whether get_params could limit us if a device needs to
>> modify parameters at runtime. For instance a panel might need to change
>> clock frequencies or number of used data lane depending on various
>> runtime conditions. I don't have a real use case here, so it might just
>> be a bit of over-engineering.
> 
> Hmm, isn't it in the opposite direction (the user requests the display 
> interface to change, for example, video mode, which in turn reconfigures 
> the link and requests the panel to update its settings)?

Well, now, that's the question.

Let's consider a simplified case with a DSI output from the SoC, and a
DSI panel. If the panel is a rather simple one, you could well call a
method in the DSI output driver's API, which would do the necessary
configuration and tell the panel driver to configure itself based on
those parameters.

However, in my experience display devices, DSI devices in particular,
are often quite "interesting". If the control of the operation in
question is on the DSI output side, we are limited in what kind of DSI
panels we can use, as the DSI output driver has to know all the tricks
needed to make the panels work.

I'm having a hard time trying to explain this, so let's try an example.
Consider the initial powering up of the bus. If the DSI output driver is
in control, something like the following probably happens:

- output driver asks for the parameters from the panel driver
- output driver programs the DSI output according to these parameters
- output driver informs the panel that the bus is now up and running

Then consider a totally invented case, which perhaps describes the
problem with the above approach: the DSI panel requires the DSI bus
first to be started in HS mode, but with a certain predefined bus speed,
and only one lane used. This predefined bus setup is used to send
configuration commands to the panel hardware, and only after that can
you reconfigure the bus with the proper speed and lanes.

That kind of thing is a bit difficult to manage when the DSI output
driver is in control. However, if the DSI panel driver is in control,
it's simple (see the sketch after the list):

- panel driver calls config methods in the DSI output driver, setting
the predefined speed and one lane
- panel driver sends configuration commands to the panel
- panel driver calls config methods in the DSI output driver, setting
the final bus config
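
As pseudo-C, with totally invented function names, just to show the
control flow:

static int panel_power_on(struct panel_drv_data *ddata)
{
	int r;

	/* predefined bus setup: fixed HS speed, one lane */
	r = dsi_configure_bus(ddata->in, PANEL_FIXED_HS_RATE, 1);
	if (r)
		return r;

	/* send the panel's configuration commands over that link */
	r = panel_write_init_sequence(ddata);
	if (r)
		return r;

	/* and only now set up the proper speed and lane count */
	return dsi_configure_bus(ddata->in, ddata->hs_clk_rate,
			ddata->num_data_lanes);
}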

>>>5. Mask of used data lanes (each bit represents single lane).
>>
>> From my experience with MIPI CSI (Camera Serial Interface, similar to
>> DSI) some transceivers can reroute data lanes internally. Is that true
>> for DSI as well ? In that case we would need a list of data lanes, not
>> just a mask.
> 
> Hmm, I don't remember at the moment, will have to look to the 
> specification. Exynos DSI master doesn't support such feature.

In OMAP you can configure the DSI pins quite freely. We have the
following struct:

struct omap_dsi_pin_config {
int num_pins;
/*
 * pin numbers in the following order:
 * clk+, clk-
 * data1+, data1-
 * data2+, data2-
 * ...
 */
int pins[OMAP_DSS_MAX_DSI_PINS];
};

 Tomi






Re: [PATCH 1/3] include: fb: Add definiton for window positioning structure

2011-09-20 Thread Tomi Valkeinen
On Tue, 2011-09-20 at 20:08 +0300, Baruch Siach wrote:
> Hi Ajay,
> 
> On Tue, Sep 20, 2011 at 08:56:57PM +0530, Ajay kumar wrote:
> > Hi Baruch,
> > On Tue, Sep 20, 2011 at 4:54 PM, Baruch Siach  wrote:
> > > Hi Ajay,
> > >
> > > On Tue, Sep 20, 2011 at 11:30:39AM -0400, Ajay Kumar wrote:
> > >> This patch adds a data structure definiton to hold framebuffer 
> > >> windows/planes.
> > >> An ioctl number is also added to provide user access
> > >> to change window position dynamically.
> > >
> > > [snip]
> > >
> > >> +/* Window overlaying */
> > >> +struct fb_overlay_win_pos {
> > >> +	__u32 win_pos_x;	/* x-offset from LCD(0,0) where window starts */
> > >> +	__u32 win_pos_y;	/* y-offset from LCD(0,0) where window starts */
> > >> +};
> > >
> > > Why not allow negative offsets where the left or upper part of the
> > > framebuffer is hidden?
> > 
> > Thanks for pointing it out. Are there drivers which place the overlay
> > windows such that some part of the window is hidden from being
> > displayed on the screen?
> 
> I don't know. However, since this is new userspace ABI which should stay
> compatible forever, we should make sure to do it right. Using __s32
> instead of __u32 won't limit us in the future.

OMAP HW doesn't allow "funny" things like an overlay being outside the
visible area, i.e. negative position or size larger than the display.
And my guess is that hardware rarely allows things like that, as it
would just complicate the hardware without any gain.

Out-of-display overlays can of course be emulated in software. But I'm
not sure if it's good to add that complexity in the driver layer, as it
could as well be handled in userspace.

Then again, a signed value would be safer for the future ("just in
case"), and if the driver can just reject values it doesn't want to
support, there's no real harm there either.
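
I.e. the struct would simply become (a sketch, not a tested patch):

struct fb_overlay_win_pos {
	/* negative values mean the window starts above/left of LCD(0,0) */
	__s32 win_pos_x;
	__s32 win_pos_y;
};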

 Tomi




Re: [PATCH 1/3] include: fb: Add definiton for window positioning structure

2011-09-20 Thread Tomi Valkeinen
On Tue, 2011-09-20 at 16:55 +, Florian Tobias Schandinat wrote:

> Did you have a look at the (existing) API [1] Laurent proposed for discovering
> the internal connections between the framebuffers (or with any other devices)?

I know the basics of media controller, but I haven't really looked at
the code.

> If you agree that it'd be a good idea to use it I feel that we should make the
> windowing API more compatible with it. So basically what we want to have as a

I can't say if MC should be used for fb or not, but I think something
similar should be used if we want to get overlays etc. supported. But
even with MC (or something else) there, I'm not quite sure how they fit
into the fb model.

I think the core problem here is that in a more complex setup (at least
as I see it for OMAP) the fb should be just a framebuffer in memory,
without any direct relation to hardware. The hardware part (an overlay,
or whichever is the corresponding element) will then use the framebuffer
as the pixel source.

However, the current fb model combines those two things into one. If we
manage to separate them, and add MC or similar, I think it'll work.
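
In MC-ish terms the separation would look roughly like this (a
hypothetical topology, just to illustrate):

 fb0 (memory buffer) --> ovl0 (HW overlay) --> display (timings)

i.e. the fb entity knows nothing about timings; those live in the
display entity, and the overlay is just the link routing pixels from
the memory buffer to the display.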

> window is one or more sink pads so the pad-index should be also part of the
> interface. I'm still confused with how OMAP works when it does not have a
> "root"

Do you mean how the hardware works, or how I've designed the omapfb
driver?

> window/framebuffer. Normally I feel that the window position should be a
> property of the parent window as this is what the position is relative to.
> But if the parent is no framebuffer, should we also include the entity into
> the interface to allow configuring things that are not even framebuffers?

Right, I think you're pondering the core problem here =).

On OMAP we have the display (the whole area shown on the panel/tv),
which has video timings and things like that. But no pixel data as such
(a configurable background color is there, though).

And then we have the overlays, which are somewhere on the display, and
the overlays get pixel data from memory (framebuffers).

So in a way we have a contentless root window, but presenting that with
an fb device doesn't feel right. And the fb device would logically be
fb0, and if it couldn't show any content it couldn't be used as default
framebuffer.

> Also I think we need a z-index in case overlays overlap (might happen or?) and
> enforcing that all z-indexes are different for the same entity.

Yes, free z-order is supported in OMAP4. Previous OMAPs had fixed
z-order, although in certain configurations (enabling/disabling alpha
blending) the fixed z-order does change...

> As I see it xres/yres (together with xoffset/yoffset) is always the visible 
> part
> of the framebuffer. Typically that's also part of the timings as they define
> what is visible. With the introduction of overlays (and maybe even for some
> hardware anyway) it is no longer always true to have any timings at all. So on
> all framebuffer that do not have physical timings the timing interpretation is
> useless anyway (I'm thinking about adding a FB_CAP_NOTIMING) and what remains 
> is
> the interpretation of xres/yres as visible screen region.

For a system where there's always a root window for every output, plus
variable size overlays, FB_CAP_NOTIMING makes sense. But it would still
leave the problem if there's no root window. How to change timings on a
system like OMAP?
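
For reference, the FB_CAP_NOTIMING idea would be nothing more than a
new capability bit; the name and value here are hypothetical:

#define FB_CAP_NOTIMING	0x0002	/* hypothetical: the timing fields of
				 * this fb have no physical meaning */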

 Tomi




Re: [PATCH 1/3] include: fb: Add definiton for window positioning structure

2011-09-20 Thread Tomi Valkeinen
On Tue, 2011-09-20 at 20:16 +0530, Ajay kumar wrote:
> Hi Tomi,
> 
> On Tue, Sep 20, 2011 at 4:40 PM, Tomi Valkeinen  wrote:
> > On Tue, 2011-09-20 at 11:30 -0400, Ajay Kumar wrote:
> >> This patch adds a data structure definiton to hold framebuffer 
> >> windows/planes.
> >> An ioctl number is also added to provide user access
> >> to change window position dynamically.
> >>
> >> Signed-off-by: Ajay Kumar 
> >> Signed-off-by: Banajit Goswami 
> >> Suggested-by: Marek Szyprowski 
> >> ---
> >>  include/linux/fb.h |7 +++
> >>  1 files changed, 7 insertions(+), 0 deletions(-)
> >>
> >> diff --git a/include/linux/fb.h b/include/linux/fb.h
> >> index 1d6836c..2141941 100644
> >> --- a/include/linux/fb.h
> >> +++ b/include/linux/fb.h
> >> @@ -39,6 +39,7 @@
> >>  #define FBIOPUT_MODEINFO	0x4617
> >>  #define FBIOGET_DISPINFO	0x4618
> >>  #define FBIO_WAITFORVSYNC	_IOW('F', 0x20, __u32)
> >> +#define FBIOPOS_OVERLAY_WIN  _IOW('F', 0x21, struct fb_overlay_win_pos)
> >>
> >>  #define FB_TYPE_PACKED_PIXELS	0	/* Packed Pixels */
> >>  #define FB_TYPE_PLANES	1	/* Non interleaved planes */
> >> @@ -366,6 +367,12 @@ struct fb_image {
> >>  	struct fb_cmap cmap;	/* color map info */
> >>  };
> >>
> >> +/* Window overlaying */
> >> +struct fb_overlay_win_pos {
> >> +	__u32 win_pos_x;	/* x-offset from LCD(0,0) where window starts */
> >> +	__u32 win_pos_y;	/* y-offset from LCD(0,0) where window starts */
> >> +};
> >
> > Shouldn't this also include the window size (in case scaling is
> > supported)?
> 
> The "xres" and "yres" fields in fb_var_screeninfo are being used to
> represent the size of the window (visible resolution). So we have,
> 
> win_pos_x: x-offset from LCD(0,0) where window starts.
> win_pos_y: y-offset from LCD(0,0) where window starts.
> (win_pos_x + xres) : x-offset from LCD(0,0) where window ends.
> (win_pos_y + yres) : y-offset from LCD(0,0) where window ends.

Sure, but xres and yres tell the _input_ resolution, i.e. how many
pixels are read from memory. What is missing is the _output_
resolution, which is the size of the window on the display. These are
not necessarily the same if the system supports scaling.
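
So with scaling in the picture the struct would need the output size as
well; the field names below are invented for illustration:

struct fb_overlay_win_pos {
	__s32 win_pos_x;	/* output position on the display */
	__s32 win_pos_y;
	__u32 win_out_width;	/* output size; equals xres/yres only */
	__u32 win_out_height;	/* when the overlay does not scale */
};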

> > This also won't work for setups where the same framebuffer is used by
> > multiple overlays. For example, this is the case on OMAP when the same
> > content is cloned to, say, LCD and TV, each of which is showing an
> > overlay.
> 
> These x and y position are used to configure the display controller
> (for LCD only) and not to alter the data in physical buffer
> (framebuffer). Could you elaborate the above use case you have
> mentioned and how adding the x and y offsets would not meet that
> requirement.

Nothing wrong with adding x/y offsets, but the problem is in configuring
the two overlays. If the framebuffer data is used by two overlays, each
overlay should be configured separately. And your ioctl does not have
any way to define which overlay is being affected.
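
So at minimum the ioctl argument would need something like an overlay
index (again just a hypothetical sketch):

struct fb_overlay_win_pos {
	__u32 overlay_idx;	/* which overlay of this fb to configure */
	__s32 win_pos_x;
	__s32 win_pos_y;
};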

Of course, if we specify that a single framebuffer will only ever go to
one output, the problem disappears.

However, even if we specify so, this will make fbdev a bit weird: what
is x/yres after this patch? In the current fbdev, x/yres is the size of
the output and part of the video timings. After this patch that is no
longer the case: x/yres will be the size of the overlay. But the old
code will still use x/yres as part of the video timings, making things
confusing.

And generally I can't really make up my mind about adding these more
complex features. On one hand it would be very nice to have fbdev
supporting overlays and whatnot, but on the other hand, I can't figure
out how to add them properly.

 Tomi




Re: [PATCH 1/3] include: fb: Add definiton for window positioning structure

2011-09-20 Thread Tomi Valkeinen
On Tue, 2011-09-20 at 11:30 -0400, Ajay Kumar wrote:
> This patch adds a data structure definiton to hold framebuffer windows/planes.
> An ioctl number is also added to provide user access
> to change window position dynamically.
> 
> Signed-off-by: Ajay Kumar 
> Signed-off-by: Banajit Goswami 
> Suggested-by: Marek Szyprowski 
> ---
>  include/linux/fb.h |7 +++
>  1 files changed, 7 insertions(+), 0 deletions(-)
> 
> diff --git a/include/linux/fb.h b/include/linux/fb.h
> index 1d6836c..2141941 100644
> --- a/include/linux/fb.h
> +++ b/include/linux/fb.h
> @@ -39,6 +39,7 @@
>  #define FBIOPUT_MODEINFO	0x4617
>  #define FBIOGET_DISPINFO	0x4618
>  #define FBIO_WAITFORVSYNC	_IOW('F', 0x20, __u32)
> +#define FBIOPOS_OVERLAY_WIN  _IOW('F', 0x21, struct fb_overlay_win_pos)
>  
>  #define FB_TYPE_PACKED_PIXELS	0	/* Packed Pixels */
>  #define FB_TYPE_PLANES	1	/* Non interleaved planes */
> @@ -366,6 +367,12 @@ struct fb_image {
>  	struct fb_cmap cmap;	/* color map info */
>  };
>  
> +/* Window overlaying */
> +struct fb_overlay_win_pos {
> +	__u32 win_pos_x;	/* x-offset from LCD(0,0) where window starts */
> +	__u32 win_pos_y;	/* y-offset from LCD(0,0) where window starts */
> +};

Shouldn't this also include the window size (in case scaling is
supported)?

This also won't work for setups where the same framebuffer is used by
multiple overlays. For example, this is the case on OMAP when the same
content is cloned to, say, LCD and TV, each of which is showing an
overlay.

 Tomi




Re: [PATCH 0/2] video: s3c-fb: Add window positioning support

2011-09-02 Thread Tomi Valkeinen
Hi,

On Thu, 2011-09-01 at 16:45 +, Florian Tobias Schandinat wrote:
> Hi all,
> 
> On 08/25/2011 07:51 PM, Ajay Kumar wrote:
> > Just as a note, there are many drivers like mx3fb.c, au1200fb.c and OMAP
> > seem to be doing window/plane positioning in their driver code.
> > Is it possible to have this window positioning support at a common place?
> 
> Good point. Congratulations for figuring out that I like to standardize 
> things.
> But I think your suggestion is far from being enough to be useful for 
> userspace
> (which is our goal so that applications can be reused along drivers and don't
> need to know about individual drivers).
> 
> So let me at first summarize how I understand you implemented those things 
> after
> having a brief look at some of the drivers:
> Windows are rectangular screen areas whose pixel data come from other 
> locations.
> The other locations are accessible via other framebuffer devices (e.g. fb1). 
> So
> in this area the data of fb1 is shown and not the data of fb0 that would be
> normally shown.

Here's what we have on OMAP:

We have a bunch of hardware overlays, each of which can go to one
output. When using fbdev, the pixel data for overlays comes from the
framebuffers. One fb can be used as a pixel source for multiple
overlays.

So, for example, the "connections" can be like these:

   Initial configuration

 .-. .--.   .--.
 | fb0 |>| ovl0 |-.>| LCD  |
 '-' '--' | '--'
 .-. .--. |
 | fb1 |>| ovl1 |-|
 '-' '--' |
 .-. .--. | .--.
 | fb2 |>| ovl2 |-' |  TV  |
 '-' '--'   '--'


  Video on fb1, shown on LCD and TV

 .-. .--.   .--.
 | fb0 |>| ovl0 |-.>| LCD  |
 '-' '--' | '--'
 .-. .--. |
 | fb1 |.--->| ovl1 |-'
 '-'|'--'
 .-.|.--.   .--.
 | fb2 |'--->| ovl2 |-->|  TV  |
 '-' '--'   '--'


As for how the actual image is composited on the display: first we have
the fb side (no news there):

 .-fb-vxres-.
 |  |
 |  |
 |   (xoffset,yoffset)  |
 |   .---fb-xres.   |
 f   |  |   |
 b   f  |   |
 |   b  |   |
 v   |  |   |
 y   y  |   |
 r   r  |   |
 e   e  |   |
 s   s  |   |
 |   |  |   |
 |   '--'   |
 |  |
 |  |
 '--'

The area marked by x/yoffset and x/yres is used as the pixel source for
the overlay.

On the display we have something like this:

 .disp-xres.
 | |
 | (xpos,ypos) |
 |  .-ovl-xres-.   |
 d  |  |   |
 i  o  |   |
 s  v  |   |
 p  l  |   |
 |  |  |   |
 y  y  |   |
 r  r  |   |
 e  e  |   |
 s  s  |   |
 |  |  |   |
 |  '--'   |
 | |
 '-'

The x/ypos of the overlay does not have any relation to the x/yoffset
of the framebuffer. The overlay's x/yres is the same as the fb's x/yres
in case the overlay doesn't support scaling (in OMAP's case this is
true for overlay0). Otherwise the overlay's x/yres is only limited by
the HW's scaling capabilities.
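
So per overlay the setup boils down to these parameters. This is just a
sketch; the real omapdss struct differs in the details:

struct overlay_info {
	u16 src_x, src_y;	/* fb x/yoffset: where to read in the fb */
	u16 width, height;	/* fb x/yres: how many pixels to read */
	u16 pos_x, pos_y;	/* x/ypos: position on the display */
	u16 out_width, out_height;	/* output size, if the overlay scales */
};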

The overlays have a priority order, and where there's no overlay, a
background color is used. So like this:

 .-.
 |background color |
 | .---.   |
 | | ovl0  |   |
 | |   |   |
 | | .--.  |   |
 | | | ovl1 |  |   |
 | | |  |  |   |
 | | |.---.|
 | | ||   ovl2||
 | | '