Re: [PATCH 5/7] omapfb: omapfb_dss.h: add stubs to build with COMPILE_TEST && DRM_OMAP

2018-04-26 Thread Tomi Valkeinen
On 25/04/18 14:13, Bartlomiej Zolnierkiewicz wrote:
> 
> On Monday, April 23, 2018 05:11:14 PM Tomi Valkeinen wrote:
>> On 23/04/18 16:56, Bartlomiej Zolnierkiewicz wrote:
>>
>>> Ideally we should be able to build both drivers in the same kernel
>>> (especially as modules).
>>>
>>> I was hoping that it could be fixed easily but then I discovered
>>> the root source of the problem:
>>>
>>> drivers/gpu/drm/omapdrm/dss/display.o: In function `omapdss_unregister_display':
>>> display.c:(.text+0x2c): multiple definition of `omapdss_unregister_display'
>>> drivers/video/fbdev/omap2/omapfb/dss/display.o:display.c:(.text+0x198): first defined here
>>
>> The main problem is that omapdrm and omapfb are two different drivers
>> for the same HW. You need to pick one, even if we would change those
>> functions and fix the link issue.
> 
> With proper resource allocation in both drivers this shouldn't be
> a problem - the one which allocates resources first will be used
> (+ we can initialize omapdrm first in case it is built-in). This is
> how similar situations are handled in other kernel subsystems.

Both drivers have boot-time code, which is always built-in and adjusts
device tree data. I imagine those could easily conflict. Maybe
there's something else too.

And it's not only about the main omapfb or omapdrm driver. We have
drivers for the encoders and panels. Those would conflict too. I guess
we could have the case where omapdrm probes, and then a panel driver
from omapfb probes.

Actually, many of the panel and encoder drivers probably have the same
symbols too, which would prevent linking.

> It seems that the real root problem is commit f76ee892a99e ("omapfb:
> copy omapdss & displays for omapfb") from Dec 2015 which resulted in
> duplication of ~30 KLOC of code. The code in question seems to be
> both fbdev & drm independent:
> 
> "
> * omapdss, located in drivers/video/fbdev/omap2/dss/. This is a driver for the
>   display subsystem IPs used on OMAP (and related) SoCs. It offers only a
>   kernel internal API, and does not implement anything for fbdev or drm.
> 
> * omapdss panels and encoders, located in
>   drivers/video/fbdev/omap2/displays-new/. These are panel and external encoder
>   drivers, which use APIs offered by omapdss driver. These also don't implement
>   anything for fbdev or drm.
> "
> 
> While I understand some motives behind this change I'm not overall
> happy with it..

Neither was I, but I have to say it was a game-changer for omapdrm
development. Doing anything new on omapdrm meant trying to get it to
work on omapfb too (in the same commit, so cross-tree), and many changes
were just too complex to even try. After that commit we were free to
really start restructuring the code to fit the DRM world.

>> At some point in time we could compile both as modules (but not
>> built-in), but the only use for that was development/testing and in the
>> end made our life more difficult. So, now you must fully disable one of
>> them to enable the other. And, actually, we even have boot-time code,
>> not included in the module itself, which gets enabled when omapdrm is
>> enabled.
> 
> Do you mean some code in arch/arm/mach-omap2/ or something else?

That and the omapdss-boot-init.c we have for both omapfb and omapdrm.

>> While it's of course good to support COMPILE_TEST, if using COMPILE_TEST
>> with omapfb is problematic, I'm not sure if it's worth spending time on
>> that. We should be moving away from omapfb to omapdrm.
> 
> Is there some approximate schedule for omapfb removal available?

Unfortunately not. omapfb still has support for some legacy devices that
omapdrm doesn't support.

 Tomi

-- 
Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki.
Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki


Re: [PATCH 5/7] omapfb: omapfb_dss.h: add stubs to build with COMPILE_TEST && DRM_OMAP

2018-04-25 Thread Tomi Valkeinen
On 25/04/18 13:02, Laurent Pinchart wrote:
> Hi Tomi,
> 
> On Wednesday, 25 April 2018 12:33:53 EEST Tomi Valkeinen wrote:
>> On 25/04/18 12:03, Laurent Pinchart wrote:
>>> Could we trim down omapfb to remove support for the devices supported by
>>> omapdrm ?
>>
>> I was thinking about just that. But, of course, it's not quite
>> straightforward either.
>>
>> We've got DSI manual update functionality in OMAP3-OMAP5 SoCs, which
>> covers a lot of devices.
> 
> Sebastian is working on getting that feature in omapdrm, isn't he ?

Yes, and I keep pushing it forward because of the restructuring you're
doing =) (feel free to comment on that thread). But agreed, it's getting
better. Once we have manual update support, the biggest missing feature
will then be in omapdrm.

>> And VRFB on OMAP2/3.
> 
> And that's something I'd really like to have in omapdrm too.

Considering how much headache TILER has given, I'm not exactly looking
forward to it ;).

If we get manual update and VRFB, I think we are more or less covered on
the supported HW features. It'll still break userspace apps which use
omapfb, though. Unless we also port the omapfb specific IOCTLs and the
sysfs files, which I believe we should not.

 Tomi

-- 
Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki.
Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki


Re: [PATCH 5/7] omapfb: omapfb_dss.h: add stubs to build with COMPILE_TEST && DRM_OMAP

2018-04-25 Thread Tomi Valkeinen
On 25/04/18 12:03, Laurent Pinchart wrote:

> Could we trim down omapfb to remove support for the devices supported by 
> omapdrm ?

I was thinking about just that. But, of course, it's not quite
straightforward either.

We've got DSI manual update functionality in OMAP3-OMAP5 SoCs, which
covers a lot of devices. And VRFB on OMAP2/3. Those need omapfb.

 Tomi

-- 
Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki.
Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki


Re: [PATCH 5/7] omapfb: omapfb_dss.h: add stubs to build with COMPILE_TEST && DRM_OMAP

2018-04-25 Thread Tomi Valkeinen
On 23/04/18 23:09, Mauro Carvalho Chehab wrote:

>> I don't think it's worth it renaming the common symbols. They will change
>> over time as omapdrm is under heavy rework, and it's painful enough without
>> having to handle cross-tree changes.
> 
> It could just rename the namespace-conflicting FB_OMAP2 functions,
> keeping the DRM ones as-is.

Yes, I'm fine with renaming omapfb functions if that helps. But still,
if omapdrm is enabled in the kernel as module or built-in, omapfb will
not work. So even if we get them to compile and link, it'll break at
runtime one way or another.
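
For illustration only, such a rename would just prefix the omapfb copies
and update their callers inside omapfb; the new name below is only an
example, nothing has been agreed:

-void omapdss_unregister_display(struct omap_dss_device *dssdev)
+void omapfb_dss_unregister_display(struct omap_dss_device *dssdev)
 {
 	...
 }

(and the same for the other duplicated omapdss_* symbols under
drivers/video/fbdev/omap2/omapfb/dss/.)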

>> Let's just live with the fact that both drivers 
>> can't be compiled at the same time, given that omapfb is deprecated.
> 
> IMO, a driver that is deprecated, being in a state where it
> conflicts with a non-deprecated driver that is under heavy rework,
> is a very good candidate to go to drivers/staging or even to /dev/null.

The problem is that it supports old devices which are not supported by
omapdrm. But both omapfb and omapdrm support many of the same devices.

 Tomi

-- 
Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki.
Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki


Re: [PATCH 5/7] omapfb: omapfb_dss.h: add stubs to build with COMPILE_TEST && DRM_OMAP

2018-04-23 Thread Tomi Valkeinen
On 23/04/18 16:56, Bartlomiej Zolnierkiewicz wrote:

> Ideally we should be able to build both drivers in the same kernel
> (especially as modules).
> 
> I was hoping that it could be fixed easily but then I discovered
> the root source of the problem:
> 
> drivers/gpu/drm/omapdrm/dss/display.o: In function `omapdss_unregister_display':
> display.c:(.text+0x2c): multiple definition of `omapdss_unregister_display'
> drivers/video/fbdev/omap2/omapfb/dss/display.o:display.c:(.text+0x198): first defined here

The main problem is that omapdrm and omapfb are two different drivers
for the same HW. You need to pick one, even if we would change those
functions and fix the link issue.

At some point in time we could compile both as modules (but not
built-in), but the only use for that was development/testing and in the
end made our life more difficult. So, now you must fully disable one of
them to enable the other. And, actually, we even have boot-time code,
not included in the module itself, which gets enabled when omapdrm is
enabled.

While it's of course good to support COMPILE_TEST, if using COMPILE_TEST
with omapfb is problematic, I'm not sure if it's worth spending time on
that. We should be moving away from omapfb to omapdrm.

 Tomi

-- 
Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki.
Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki


Re: [PATCH v2 15/19] omap2: omapfb: allow building it with COMPILE_TEST

2018-04-06 Thread Tomi Valkeinen
On 05/04/18 23:29, Mauro Carvalho Chehab wrote:
> This driver builds cleanly with COMPILE_TEST, and it is
> needed in order to allow building drivers/media omap2
> driver.
> 
> So, change the logic there to allow building it.
> 
> Signed-off-by: Mauro Carvalho Chehab <mche...@s-opensource.com>
> ---
>  drivers/video/fbdev/omap2/Kconfig | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/video/fbdev/omap2/Kconfig 
> b/drivers/video/fbdev/omap2/Kconfig
> index 0921c4de8407..82008699d253 100644
> --- a/drivers/video/fbdev/omap2/Kconfig
> +++ b/drivers/video/fbdev/omap2/Kconfig
> @@ -1,4 +1,4 @@
> -if ARCH_OMAP2PLUS
> +if ARCH_OMAP2PLUS || COMPILE_TEST
>  
>  source "drivers/video/fbdev/omap2/omapfb/Kconfig"
>  
> 

Acked-by: Tomi Valkeinen <tomi.valkei...@ti.com>

 Tomi

-- 
Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki.
Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki


Re: [PATCH for 4.15] omapdrm/dss/hdmi4_cec: fix interrupt handling

2017-12-19 Thread Tomi Valkeinen

On 04/12/17 15:32, Hans Verkuil wrote:

The omap4 CEC hardware cannot tell a Nack from a Low Drive from an
Arbitration Lost error, so just report a Nack, which is almost
certainly the reason for the error anyway.

This also simplifies the implementation. The only three interrupts
that need to be enabled are:

Transmit Buffer Full/Empty Change event: triggered when the
transmit finished successfully and cleared the buffer.

Receiver FIFO Not Empty event: triggered when a message was received.

Frame Retransmit Count Exceeded event: triggered when a transmit
failed repeatedly, usually due to the message being Nacked. Other
reasons are possible (Low Drive, Arbitration Lost) but there is no
way to know. If this happens the TX buffer needs to be cleared
manually.

While testing various error conditions I noticed that the hardware
can receive messages up to 18 bytes in total, which exceeds the legal
maximum of 16. This could cause a buffer overflow, so we check for
this and constrain the size to 16 bytes.

The old incorrect interrupt handler could cause the CEC framework to
enter into a bad state because it mis-detected the "Start Bit Irregularity
event" as an ARB_LOST transmit error when it actually is a receive error
which should be ignored.

Signed-off-by: Hans Verkuil 
Reported-by: Henrik Austad 
Tested-by: Henrik Austad 
Tested-by: Hans Verkuil 
---
  drivers/gpu/drm/omapdrm/dss/hdmi4_cec.c | 46 +++--
  1 file changed, 9 insertions(+), 37 deletions(-)


Thanks, I have picked up this patch.

 Tomi

--
Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki. 
Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki


Re: [PATCHv2 0/9] omapdrm: hdmi4: add CEC support

2017-10-12 Thread Tomi Valkeinen

Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki. 
Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki

On 12/10/17 12:42, Hans Verkuil wrote:
> On 10/12/17 10:03, Tomi Valkeinen wrote:
>>
>> Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki. 
>> Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki
>>
>> On 12/10/17 09:50, Hans Verkuil wrote:
>>
>>>> I can't test with a TV, so no CEC for me... But otherwise I think the
>>>> series works ok now, and looks ok. So I'll apply, but it's a bit late
>>>> for the next merge window, so I'll aim for 4.15 with this.
>>>
>>> What is the status? Do you need anything from me? I'd like to get this in 
>>> for 4.15.
>>
>> Thanks for reminding. I think I would've forgotten...
>>
>> I sent the pull request, so all should be fine.
>>
>> If possible, please test the pull request, preferably with drm-next
>> merged (git://people.freedesktop.org/~airlied/linux drm-next), as I
>> don't have a CEC capable display.
> 
> I'll try to do that tomorrow or Monday.
> 
> I have one other question for you: does keeping ls_oe_gpio high all the
> time affect this old bug fix:
> 
> https://github.com/myfluxi/xxICSKernel/commit/21189f03d3ec3a74d9949907c828410d7a9a86d5

No, the issue is about the HDMI PHY. The i2c lines are not related.
LS_OE only affects the i2c.

> I don't think so, but I'm not 100% sure. As far as I can see the PHY power
> sequence (OFF, TXON, LDOON) does not change and that seems to be the
> crucial fix according to the commit above.
> 
> I would hate being responsible for lots of burnt-out pandaboards :-)
> 
> There is an alternative solution: when there is no HPD then it is also
> possible to pull the ls_oe_gpio high only when transmitting a CEC message.
> 
> That would obviously require some code changes.
> 
> Sorry for raising this issue so late, but this just came up today in
> internal discussions.

Well, it would be nice to not have LS_OE enabled all the time. But I'm
not sure how much that really matters. I'm sure it uses some power, but
whether that is even measurable, I have no idea.

 Tomi



Re: [PATCHv2 0/9] omapdrm: hdmi4: add CEC support

2017-10-12 Thread Tomi Valkeinen

Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki. 
Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki

On 12/10/17 09:50, Hans Verkuil wrote:

>> I can't test with a TV, so no CEC for me... But otherwise I think the
>> series works ok now, and looks ok. So I'll apply, but it's a bit late
>> for the next merge window, so I'll aim for 4.15 with this.
> 
> What is the status? Do you need anything from me? I'd like to get this in for 
> 4.15.

Thanks for reminding. I think I would've forgotten...

I sent the pull request, so all should be fine.

If possible, please test the pull request, preferably with drm-next
merged (git://people.freedesktop.org/~airlied/linux drm-next), as I
don't have a CEC capable display.

 Tomi



Re: [PATCHv2 0/9] omapdrm: hdmi4: add CEC support

2017-08-22 Thread Tomi Valkeinen
Hi Hans,

>> 
Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki. 
Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki

On 11/08/17 13:57, Tomi Valkeinen wrote:
>>
>>> I'm doing some testing with this series on my panda. One issue I see is
>>> that when I unload the display modules, I get:
>>>
>>> [   75.180206] platform 58006000.encoder: enabled after unload, idling
>>> [   75.187896] platform 58001000.dispc: enabled after unload, idling
>>> [   75.198242] platform 5800.dss: enabled after unload, idling
>>
>> This one is caused by hdmi_cec_adap_enable() never getting called with
>> enable=false when unloading the modules. Should that be called
>> explicitly in hdmi4_cec_uninit, or is the CEC framework supposed to call it?
> 
> Nicely found!
> 
> The cec_delete_adapter() function calls __cec_s_phys_addr(CEC_PHYS_ADDR_INVALID),
> which would normally call adap_enable(false), except when the device node was
> already unregistered, in which case it just returns immediately.
> 
> The patch below should fix this. Let me know if it works, and I'll post a
> proper patch and get that in for 4.14 (and possibly backported as well, I'll
> have to look at that).

Thanks, this fixes the issue.

I again saw "HDMICORE: omapdss HDMICORE error: operation stopped when
reading edid" when I loaded the modules. My panda also froze just now
when unloading the display modules, and it doesn't react to sysrq.

After testing a bit without the CEC patches, I saw the above error, so I
don't think it's related to your patches.

I can't test with a TV, so no CEC for me... But otherwise I think the
series works ok now, and looks ok. So I'll apply, but it's a bit late
for the next merge window, so I'll aim for 4.15 with this.

 Tomi



Re: [PATCHv2 0/9] omapdrm: hdmi4: add CEC support

2017-08-18 Thread Tomi Valkeinen
Hi Hans,


Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki. 
Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki

On 11/08/17 13:57, Tomi Valkeinen wrote:

> I'm doing some testing with this series on my panda. One issue I see is
> that when I unload the display modules, I get:
> 
> [   75.180206] platform 58006000.encoder: enabled after unload, idling
> [   75.187896] platform 58001000.dispc: enabled after unload, idling
> [   75.198242] platform 5800.dss: enabled after unload, idling

This one is caused by hdmi_cec_adap_enable() never getting called with
enable=false when unloading the modules. Should that be called
explicitly in hdmi4_cec_uninit, or is the CEC framework supposed to call it?

 Tomi



Re: [PATCHv2 0/9] omapdrm: hdmi4: add CEC support

2017-08-17 Thread Tomi Valkeinen

Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki. 
Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki

On 11/08/17 13:57, Tomi Valkeinen wrote:
> Hi Hans,
> 
> On 02/08/17 11:53, Hans Verkuil wrote:
>> From: Hans Verkuil <hans.verk...@cisco.com>
>>
>> This patch series adds CEC support for the omap4. It is based on
>> the 4.13-rc2 kernel with this patch series applied:
>>
>> http://www.spinics.net/lists/dri-devel/msg143440.html
>>
>> It is virtually identical to the first patch series posted in
>> April:
>>
>> http://www.spinics.net/lists/dri-devel/msg138950.html
>>
>> The only two changes are in the Kconfig due to CEC Kconfig
>> changes in 4.13 (it now selects CEC_CORE instead of depending on
>> CEC_CORE) and a final patch was added adding a lost_hotplug op
>> since for proper CEC support I have to know when the hotplug
>> signal goes away.
>>
>> Tested with my Pandaboard.
> 
> I'm doing some testing with this series on my panda. One issue I see is
> that when I unload the display modules, I get:
> 
> [   75.180206] platform 58006000.encoder: enabled after unload, idling
> [   75.187896] platform 58001000.dispc: enabled after unload, idling
> [   75.198242] platform 5800.dss: enabled after unload, idling
> 
> So I think something is left enabled, most likely in the HDMI driver. I
> haven't debugged this yet.
> 
> The first time I loaded the modules I also got "operation stopped when
> reading edid", but I haven't seen that since. Possibly not related to
> this series.

Sorry that I have had very little time to debug this. I rebased the cec code
on top of the latest omapdrm patches, and tested on AM5 EVM (which is more or
less equivalent to OMAP5 on the HDMI front). I get the following crash when
I turn on my monitor, which causes a HPD irq.

I'll continue looking at this as soon as I again find time, but I thought I'd
share what I have at the moment. I've pushed the branch to:

git://git.kernel.org/pub/scm/linux/kernel/git/tomba/linux.git 4.14/omapdrm-cec

 Tomi

[   34.640159] Unable to handle kernel NULL pointer dereference at virtual address
[   34.648449] pgd = c0004000
[   34.651249] [] *pgd=
[   34.654921] Internal error: Oops: 8007 [#1] PREEMPT SMP ARM
[   34.660879] Modules linked in: omapdrm drm_kms_helper drm connector_dvi panel_dsi_cm panel_dpi connector_analog_tv connector_hdmi encoder_tpd12s015 encoder_tfp410 omapdss omapdss_base snd_soc_omap_hdmi_audio cec cfbfillrect cfbimgblt cfbcopyarea [last unloaded: omapdss_base]
[   34.685482] CPU: 0 PID: 264 Comm: irq/248-tpd12s0 Not tainted 4.13.0-rc5-00626-gbf51300abae9 #99
[   34.694314] Hardware name: Generic DRA74X (Flattened Device Tree)
[   34.700442] task: ed108140 task.stack: ed19
[   34.705002] PC is at 0x0
[   34.707561] LR is at tpd_detect+0x3c/0x44 [encoder_tpd12s015]
[   34.713340] pc : [<>]lr : []psr: 600c0013
[   34.719642] sp : ed191ee8  ip : ed191e68  fp : ed191efc
[   34.724897] r10: 0001  r9 : ee2e0200  r8 : c01b45c8
[   34.730153] r7 : ed10b064  r6 :   r5 : bf1d716c  r4 : 
[   34.736716] r3 :   r2 :   r1 :   r0 : bf1d716c
[   34.743283] Flags: nZCv  IRQs on  FIQs on  Mode SVC_32  ISA ARM  Segment none
[   34.750458] Control: 10c5387d  Table: ad26406a  DAC: 0051
[   34.756236] Process irq/248-tpd12s0 (pid: 264, stack limit = 0xed190218)
[   34.762976] Stack: (0xed191ee8 to 0xed192000)
[   34.767363] 1ee0:   ed27e610 ed27e6d4 ed191f14 ed191f00 
bf200388 bf200310
[   34.775587] 1f00: ed10b040 ee2e0200 ed191f34 ed191f18 c01b433c bf200354 
ed19 0001
[   34.783812] 1f20:  ed10b064 ed191f74 ed191f38 c01b4660 c01b4324 
c01b4318 ed10b040
[   34.792036] 1f40:  c01b4418 ed191f74 ed1ebc80  ed392500 
ed19 ed10b040
[   34.800261] 1f60: ed1ebcb8 ed24fb08 ed191fac ed191f78 c0163e60 c01b4504 
 c01b44f8
[   34.808486] 1f80: ed191fac ed392500 c0163d30    
 
[   34.816710] 1fa0:  ed191fb0 c0108af0 c0163d3c   
 
[   34.824934] 1fc0:       
 
[   34.833157] 1fe0:     0013  
 
[   34.841376] Backtrace: 
[   34.843861] [] (tpd_detect [encoder_tpd12s015]) from [] (tpd_hpd_isr+0x40/0x68 [encoder_tpd12s015])
[   34.854701]  r5:ed27e6d4 r4:ed27e610
[   34.858310] [] (tpd_hpd_isr [encoder_tpd12s015]) from [] (irq_thread_fn+0x24/0x5c)
[   34.867667]  r5:ee2e0200 r4:ed10b040
[   34.871270] [] (irq_thread_fn) from [] (irq_thread+0x168/0x254)
[   34.878970]  r7:ed10b064 r6: r5:0001 r4:ed19
[   34.884670] [] (irq_thread) from [] (kthread+0x130/0x174)
[   34.891847

Re: [PATCHv2 0/9] omapdrm: hdmi4: add CEC support

2017-08-11 Thread Tomi Valkeinen
Hi Hans,


Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki. 
Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki

On 02/08/17 11:53, Hans Verkuil wrote:
> From: Hans Verkuil 
> 
> This patch series adds CEC support for the omap4. It is based on
> the 4.13-rc2 kernel with this patch series applied:
> 
> http://www.spinics.net/lists/dri-devel/msg143440.html
> 
> It is virtually identical to the first patch series posted in
> April:
> 
> http://www.spinics.net/lists/dri-devel/msg138950.html
> 
> The only two changes are in the Kconfig due to CEC Kconfig
> changes in 4.13 (it now selects CEC_CORE instead of depending on
> CEC_CORE) and a final patch was added adding a lost_hotplug op
> since for proper CEC support I have to know when the hotplug
> signal goes away.
> 
> Tested with my Pandaboard.

I'm doing some testing with this series on my panda. One issue I see is
that when I unload the display modules, I get:

[   75.180206] platform 58006000.encoder: enabled after unload, idling
[   75.187896] platform 58001000.dispc: enabled after unload, idling
[   75.198242] platform 5800.dss: enabled after unload, idling

So I think something is left enabled, most likely in the HDMI driver. I
haven't debugged this yet.

The first time I loaded the modules I also got "operation stopped when
reading edid", but I haven't seen that since. Possibly not related to
this series.

Are there some simple ways to test the CEC? My buildroot fs has
cec-compliance, cec-ctl and cec-follower commands. Are you familiar with
those? Can they be used?

 Tomi



Re: [PATCH 8/8] omapdrm: hdmi4: hook up the HDMI CEC support

2017-06-08 Thread Tomi Valkeinen
On 08/06/17 10:34, Hans Verkuil wrote:

>> Peter is about to send hotplug-interrupt-handling series, I think the
>> HPD function work should be done on top of that, as otherwise it'll just
>> conflict horribly.
> 
> Has that been merged yet? And if so, what git repo/branch should I base
> my next version of this patch series on? If not, do you know when it is
> expected?

No, still pending review. The patches ("[PATCH v2 0/3] drm/omap: Support
for hotplug detection") apply on top of latest drm-next, if you want to try.

 Tomi





Re: [PATCH 8/8] omapdrm: hdmi4: hook up the HDMI CEC support

2017-05-08 Thread Tomi Valkeinen
On 06/05/17 14:58, Hans Verkuil wrote:

> My assumption was that hdmi_display_disable() was called when the hotplug
> would go away. But I discovered that that isn't the case, or at least not
> when X is running. It seems that the actual HPD check is done in
> hdmic_detect() in omapdrm/displays/connector-hdmi.c.

For some HW it's done there (in the case there's no IP handling the
HPD), but in some cases it's done in tpd12s015 driver (e.g. pandaboard),
and in some cases it also could be done in the hdmi driver (if the HPD
is handled by the HDMI IP, but at the moment we don't have this case
supported in the SW).

> But there I have no access to hdmi.core (needed for the
> hdmi4_cec_set_phys_addr() call).
> 
> Any idea how to solve this? I am not all that familiar with drm, let alone
> omapdrm, so if you can point me in the right direction, then that would be
> very helpful.

Hmm, indeed, it looks like the output is kept enabled even if HPD drops and
the connector status is changed to disconnected.

I don't have a very good solution... I think we have to add a function
to omapdss_hdmi_ops, which the connector-hdmi and tpd12s015 drivers can
call when they detect a HPD change. That call would go to the HDMI IP
driver.
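
Roughly what I have in mind, as a sketch only (the op name and exact
placement are just examples, nothing is decided):

struct omapdss_hdmi_ops {
	...
	/* called by connector-hdmi / encoder-tpd12s015 when HPD changes */
	void (*lost_hotplug)(struct omap_dss_device *dssdev);
};

and in the tpd12s015/connector-hdmi HPD handling, something like:

	if (!hpd && in->ops.hdmi->lost_hotplug)
		in->ops.hdmi->lost_hotplug(in);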

Peter is about to send hotplug-interrupt-handling series, I think the
HPD function work should be done on top of that, as otherwise it'll just
conflict horribly.

 Tomi





Re: [PATCH 0/8] omapdrm: add OMAP4 CEC support

2017-04-28 Thread Tomi Valkeinen
On 14/04/17 13:25, Hans Verkuil wrote:
> From: Hans Verkuil 
> 
> This patch series adds support for the OMAP4 HDMI CEC IP core.

What is this series based on? It doesn't apply to drm-next, and:
fatal: sha1 information is lacking or useless
(drivers/gpu/drm/omapdrm/dss/hdmi4.c).

 Tomi





Re: [PATCH 2/8] omapdrm: encoder-tpd12s015: keep ls_oe_gpio high if CEC is enabled

2017-04-28 Thread Tomi Valkeinen
On 14/04/17 13:25, Hans Verkuil wrote:
> From: Hans Verkuil 
> 
> When the OMAP4 CEC support is enabled the CEC pin should always
> be on. So keep ls_oe_gpio high when CONFIG_OMAP4_DSS_HDMI_CEC
> is set.
> 
> Background: even if the HPD is low it should still be possible
> to use CEC. Some displays will set the HPD low when they go into standby or
> when they switch to another input, but CEC is still available and able
> to wake up/change input for such a display.
> 
> This is explicitly allowed by the CEC standard.
> 
> Signed-off-by: Hans Verkuil 
> ---
>  drivers/gpu/drm/omapdrm/displays/encoder-tpd12s015.c | 8 
>  1 file changed, 8 insertions(+)
> 
> diff --git a/drivers/gpu/drm/omapdrm/displays/encoder-tpd12s015.c 
> b/drivers/gpu/drm/omapdrm/displays/encoder-tpd12s015.c
> index 58276a48112e..757554e6d62f 100644
> --- a/drivers/gpu/drm/omapdrm/displays/encoder-tpd12s015.c
> +++ b/drivers/gpu/drm/omapdrm/displays/encoder-tpd12s015.c
> @@ -46,6 +46,9 @@ static int tpd_connect(struct omap_dss_device *dssdev,
>   dssdev->dst = dst;
>  
>   gpiod_set_value_cansleep(ddata->ct_cp_hpd_gpio, 1);
> +#ifdef CONFIG_OMAP4_DSS_HDMI_CEC
> + gpiod_set_value_cansleep(ddata->ls_oe_gpio, 1);
> +#endif
>   /* DC-DC converter needs at max 300us to get to 90% of 5V */
>   udelay(300);
>  
> @@ -64,6 +67,7 @@ static void tpd_disconnect(struct omap_dss_device *dssdev,
>   return;
>  
>   gpiod_set_value_cansleep(ddata->ct_cp_hpd_gpio, 0);
> + gpiod_set_value_cansleep(ddata->ls_oe_gpio, 0);
>  
>   dst->src = NULL;
>   dssdev->dst = NULL;
> @@ -146,11 +150,15 @@ static int tpd_read_edid(struct omap_dss_device *dssdev,
>   if (!gpiod_get_value_cansleep(ddata->hpd_gpio))
>   return -ENODEV;
>  
> +#ifndef CONFIG_OMAP4_DSS_HDMI_CEC
>   gpiod_set_value_cansleep(ddata->ls_oe_gpio, 1);
> +#endif
>  
>   r = in->ops.hdmi->read_edid(in, edid, len);
>  
> +#ifndef CONFIG_OMAP4_DSS_HDMI_CEC
>   gpiod_set_value_cansleep(ddata->ls_oe_gpio, 0);
> +#endif
>  
>   return r;
>  }
> 

Optimally, we want to enable LS_OE only when needed, which would be when
reading EDID, using HDCP, or using CEC. But we don't have good means to
convey that information at the moment, and I'd rather leave it for later
when we have done the bigger restructuring of omapdrm.

For now, instead of the ifdef-confusion, I think we should just enable
the LS_OE in tpd_connect and be done with it.
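
In other words, roughly this (a sketch only, not a tested patch):

 static int tpd_connect(struct omap_dss_device *dssdev,
 		struct omap_dss_device *dst)
 {
 	...
 	gpiod_set_value_cansleep(ddata->ct_cp_hpd_gpio, 1);
+	/* keep the level shifter on for EDID, HDCP and CEC alike */
+	gpiod_set_value_cansleep(ddata->ls_oe_gpio, 1);
 	/* DC-DC converter needs at max 300us to get to 90% of 5V */
 	udelay(300);

and then drop the ls_oe_gpio toggling (and the #ifdefs) from
tpd_read_edid() entirely; tpd_disconnect() already sets the GPIO back to 0.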

 Tomi





Re: [PATCH 6/8] omapdrm: hdmi4: refcount hdmi_power_on/off_core

2017-04-28 Thread Tomi Valkeinen
On 14/04/17 13:25, Hans Verkuil wrote:
> From: Hans Verkuil 
> 
> The hdmi_power_on/off_core functions can be called multiple times:
> when the HPD changes and when the HDMI CEC support needs to power
> the HDMI core.
> 
> So use a counter to know when to really power on or off the HDMI core.
> 
> Also call hdmi4_core_powerdown_disable() in hdmi_power_on_core() to
> power up the HDMI core (needed for CEC).
> 
> Signed-off-by: Hans Verkuil 
> ---
>  drivers/gpu/drm/omapdrm/dss/hdmi4.c | 12 +++-
>  1 file changed, 11 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/omapdrm/dss/hdmi4.c 
> b/drivers/gpu/drm/omapdrm/dss/hdmi4.c
> index 4a164dc01f15..e371b47ff6ff 100644
> --- a/drivers/gpu/drm/omapdrm/dss/hdmi4.c
> +++ b/drivers/gpu/drm/omapdrm/dss/hdmi4.c
> @@ -124,14 +124,19 @@ static int hdmi_power_on_core(struct omap_dss_device 
> *dssdev)
>  {
>   int r;
>  
> + if (hdmi.core.core_pwr_cnt++)
> + return 0;
> +

How's the locking between the CEC side and the DRM side? Normally these
functions are protected with the DRM modesetting locks, but CEC doesn't
come from there. We have the hdmi.lock, did you check that it's held
when CEC side calls shared functions?
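
If hdmi.lock is meant to be the guard here, then something like this in the
shared path would at least catch missed callers during testing (illustration
only, not a tested patch):

 static int hdmi_power_on_core(struct omap_dss_device *dssdev)
 {
 	int r;

+	/* both the DRM path and the CEC path must hold hdmi.lock here */
+	WARN_ON(!mutex_is_locked(&hdmi.lock));
+
 	if (hdmi.core.core_pwr_cnt++)
 		return 0;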

>   r = regulator_enable(hdmi.vdda_reg);
>   if (r)
> - return r;
> + goto err_reg_enable;
>  
>   r = hdmi_runtime_get();
>   if (r)
>   goto err_runtime_get;
>  
> + hdmi4_core_powerdown_disable();

I'd like to have the powerdown_disable as a separate patch. Also, now
that you call it here, I believe it can be dropped from hdmi4_configure().

Hmm, but in hdmi4_configure we call hdmi_core_swreset_assert() before
hdmi4_core_powerdown_disable(). I wonder what exactly that does, and
whether we end up resetting also the CEC parts when we change the videomode.

 Tomi





Re: [PATCH 1/8] arm: omap4: enable CEC pin for Pandaboard A4 and ES

2017-04-28 Thread Tomi Valkeinen
On 14/04/17 13:25, Hans Verkuil wrote:
> From: Hans Verkuil <hans.verk...@cisco.com>
> 
> The CEC pin was always pulled up, making it impossible to use it.
> 
> Change to PIN_INPUT so it can be used by the new CEC support.
> 
> Signed-off-by: Hans Verkuil <hans.verk...@cisco.com>
> ---
>  arch/arm/boot/dts/omap4-panda-a4.dts | 2 +-
>  arch/arm/boot/dts/omap4-panda-es.dts | 2 +-
>  2 files changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm/boot/dts/omap4-panda-a4.dts 
> b/arch/arm/boot/dts/omap4-panda-a4.dts
> index 78d363177762..f1a6476af371 100644
> --- a/arch/arm/boot/dts/omap4-panda-a4.dts
> +++ b/arch/arm/boot/dts/omap4-panda-a4.dts
> @@ -13,7 +13,7 @@
>  /* Pandaboard Rev A4+ have external pullups on SCL & SDA */
>  _hdmi_pins {
>   pinctrl-single,pins = <
> - OMAP4_IOPAD(0x09a, PIN_INPUT_PULLUP | MUX_MODE0)/* 
> hdmi_cec.hdmi_cec */
> + OMAP4_IOPAD(0x09a, PIN_INPUT | MUX_MODE0)   /* 
> hdmi_cec.hdmi_cec */
>   OMAP4_IOPAD(0x09c, PIN_INPUT | MUX_MODE0)   /* 
> hdmi_scl.hdmi_scl */
>   OMAP4_IOPAD(0x09e, PIN_INPUT | MUX_MODE0)   /* 
> hdmi_sda.hdmi_sda */
>   >;
> diff --git a/arch/arm/boot/dts/omap4-panda-es.dts 
> b/arch/arm/boot/dts/omap4-panda-es.dts
> index 119f8e657edc..940fe4f7c5f6 100644
> --- a/arch/arm/boot/dts/omap4-panda-es.dts
> +++ b/arch/arm/boot/dts/omap4-panda-es.dts
> @@ -34,7 +34,7 @@
>  /* PandaboardES has external pullups on SCL & SDA */
>  _hdmi_pins {
>   pinctrl-single,pins = <
> - OMAP4_IOPAD(0x09a, PIN_INPUT_PULLUP | MUX_MODE0)/* 
> hdmi_cec.hdmi_cec */
> + OMAP4_IOPAD(0x09a, PIN_INPUT | MUX_MODE0)   /* 
> hdmi_cec.hdmi_cec */
>   OMAP4_IOPAD(0x09c, PIN_INPUT | MUX_MODE0)   /* 
> hdmi_scl.hdmi_scl */
>   OMAP4_IOPAD(0x09e, PIN_INPUT | MUX_MODE0)   /* 
> hdmi_sda.hdmi_sda */
>   >;
> 

Reviewed-by: Tomi Valkeinen <tomi.valkei...@ti.com>

Tony, can you queue this? It's safe to apply separately from the rest of
the HDMI CEC work.

 Tomi





Re: [RFC PATCH 3/3] encoder-tpd12s015: keep the ls_oe_gpio on while the phys_addr is valid

2017-04-13 Thread Tomi Valkeinen
On 13/04/17 12:12, Hans Verkuil wrote:

>> Is there anything else CEC needs to access or control (besides the CEC
>> IP itself)?
> 
> The CEC framework needs to be informed about the physical address contained
> in the EDID (part of the CEA-861 block). And when the HPD goes down it needs
> to be informed as well (same call, but with CEC_PHYS_ADDR_INVALID as 
> argument).

Ah, hmm... And currently that's (kind of) handled in
hdmi_power_off_full() by setting CEC_PHYS_ADDR_INVALID? That's not the
same thing as HPD off, though, but maybe it's enough (for now).

 Tomi





Re: [RFC PATCH 3/3] encoder-tpd12s015: keep the ls_oe_gpio on while the phys_addr is valid

2017-04-13 Thread Tomi Valkeinen
On 12/04/17 17:04, Hans Verkuil wrote:

>> So is some other driver supporting this already? Or is the omap4 the
>> first platform you're trying this on?
> 
> No, there are quite a few CEC drivers by now, but typically the CEC block is
> a totally independent IP block with its own power, irq, etc. The omap4 is by
> far the most complex one to set up with various GPIO pins, interrupts, regulators,
> etc. to deal with.
> 
> Normally it takes about 2 days to make a new CEC driver, but the omap4 is much
> more work :-(

Ok.

I mentioned the omapdrm restructuring that we've planned to do, I think
after that this will be easier to implement in a nice way.

For now, I think more or less what you have now is an acceptable
solution. We can hack the tpd12s015 to keep the level shifter always
enabled, and, afaics, everything else can be handled inside the hdmi4
driver, right?

Generally speaking, what are the "dependencies" for CEC? It needs to
access EDID? Does CEC care about HPD? Does it care if the cable is
connected or not? For Panda, the level shifter of tpd12s015 is obviously
one hard dependency.

Is there anything else CEC needs to access or control (besides the CEC
IP itself)?

 Tomi





Re: [RFC PATCH 3/3] encoder-tpd12s015: keep the ls_oe_gpio on while the phys_addr is valid

2017-04-12 Thread Tomi Valkeinen
On 12/04/17 16:03, Hans Verkuil wrote:

> I noticed while experimenting with this that tpd_disconnect() in
> drivers/gpu/drm/omapdrm/displays/encoder-tpd12s015.c isn't called when
> I disconnect the HDMI cable. Is that a bug somewhere?
> 
> I would expect that tpd_connect and tpd_disconnect are balanced. The
> tpd_enable and tpd_disable calls are properly balanced and I see the
> tpd_disable when I disconnect the HDMI cable.

The connect/disconnect naming there is legacy... It's not about cable
connect, it's about the initial "connect" of the drivers in the video
pipeline. It's done just once when starting omapdrm.

> The key to keeping CEC up and running, even when there is no HPD is to keep
> the hdmi.vdda_reg regulator enabled. Also the HDMI_IRQ_CORE should always be
> on, otherwise I won't get any CEC interrupts.

At the moment there's no way to enable the pipeline without enabling the
video.

> So if the omap4 CEC support is enabled in the kernel config, then always
> enable this regulator and irq, and otherwise keep the current code.

Well, I have no clue about how CEC is used, but don't we have some
userspace components using it? I presume there's an open() or something
similar that signals that the userspace is interested in CEC. That
should be the trigger to enable the HW required for CEC.
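
As a sketch of what I mean (assuming the CEC framework has an enable/disable
hook it calls when userspace starts and stops using the adapter; the names
below are from hdmi4.c and purely illustrative, not a tested patch):

static int hdmi_cec_adap_enable(struct cec_adapter *adap, bool enable)
{
	int r;

	if (!enable) {
		/* mask the CEC interrupts, then release clocks and power */
		hdmi_runtime_put();
		regulator_disable(hdmi.vdda_reg);
		return 0;
	}

	r = regulator_enable(hdmi.vdda_reg);
	if (r)
		return r;

	r = hdmi_runtime_get();
	if (r) {
		regulator_disable(hdmi.vdda_reg);
		return r;
	}

	/* unmask HDMI_IRQ_CORE so the CEC interrupts get delivered */
	return 0;
}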

So is some other driver supporting this already? Or is the omap4 the
first platform you're trying this on?

 Tomi





Re: [RFC PATCH 3/3] encoder-tpd12s015: keep the ls_oe_gpio on while the phys_addr is valid

2017-04-10 Thread Tomi Valkeinen
On 08/04/17 13:11, Hans Verkuil wrote:

> So, this is a bit of a blast from the past since the omap4 CEC development
> has been on hold for almost a year. But I am about to resume my work on this
> now that the CEC framework was merged.
> 
> The latest code is here, if you are interested:
> 
> https://git.linuxtv.org/hverkuil/media_tree.git/log/?h=panda-cec
> 
> It's pretty much unchanged from the version I posted a year ago, just rebased.
> 
> But before I continue with this I have one question for you. First some
> background:
> 
> There is a special corner case (and I wasn't aware of that a year ago!) where
> it is allowed to send a CEC message when there is *no HPD*.
> 
> The reason is that some displays turn off the hotplug detect pin when they go
> into standby or when another input is active. The only way to communicate with
> such displays is via CEC.
> 
> The problem is that without a HPD there is no EDID and basically no way for an
> HDMI transmitter to detect that something is connected at all, unless you are
> using CEC.
> 
> What this means is that if we want to implement this on the omap4 the CEC
> support has to be on all the time.
> 
> We have seen modern displays that behave like this, so this is a real issue.
> And this corner case is specifically allowed by the CEC specification: the Poll,
> Image/Text View On and the Active Source messages can be sent to a TV even
> when there is no HPD in order to turn on the display if it was in standby and
> to make us the active input.
> 
> The CEC framework in the kernel supports this starting with 4.12 (this code is
> in the panda-cec branch above).
> 
> If this *can't* be supported by the omap4, then I will likely have to add a
> CEC capability to signal to the application that this specific corner case is
> not supported.
> 
> I just did some tests with omap4 and my impression is that this can't be
> supported: when the HPD goes away it seems like most/all of the HDMI blocks
> are all powered off and any attempt to even access the CEC registers will fail.
> 
> Changing this looks to be non-trivial if not impossible.
> 
> Can you confirm that that isn't possible? If you think this can be done, then
> I'd appreciate if you can give me a few pointers.

HPD doesn't control the HW, it's all in the SW. So while I don't know
much at all about CEC and haven't looked at this particular use case, I
believe it's doable. HPD going off will make the DRM connector to be in
"disconnected" state, which on omapdrm will cause everything about HDMI
to be turned off.

Does it work on some other DRM driver? I'm wondering if there's
something in the DRM framework side that also has to be changed, in
addition to omapdrm changes.

It could require larger SW redesigns, though... Which makes me think
that the work shouldn't be done until we have changed the omapdrm's
driver model to DRM's common bridge driver model, and then all this
could possibly be done in a more generic manner.

Well, then again, I think the hdmi driver's internal power state
handling could be improved even before that. Currently it's not very
versatile. We should have ways to partially enable the IP for just the
required parts.

 Tomi





Re: [Patch 0/2] media: ti-vpe: allow user specified stride

2017-02-17 Thread Tomi Valkeinen
On 13/02/17 15:06, Benoit Parrot wrote:
> This patch series enables user specified buffer stride to be used
> instead of always forcing the stride from the driver side.
> 
> Benoit Parrot (2):
>   media: ti-vpe: vpdma: add support for user specified stride
>   media: ti-vpe: vpe: allow use of user specified stride
> 
>  drivers/media/platform/ti-vpe/vpdma.c | 14 --
>  drivers/media/platform/ti-vpe/vpdma.h |  6 +++---
>  drivers/media/platform/ti-vpe/vpe.c   | 34 --
>  3 files changed, 31 insertions(+), 23 deletions(-)

Reviewed-by: Tomi Valkeinen <tomi.valkei...@ti.com>

 Tomi





Re: [Patch 2/2] media: ti-vpe: vpe: allow use of user specified stride

2017-02-15 Thread Tomi Valkeinen
Hi,

On 13/02/17 15:06, Benoit Parrot wrote:
> Bytesperline/stride was always overwritten by VPE to the most
> adequate value based on needed alignment.
> 
> However, in order to make use of arbitrary size DMA buffers it
> is better to use the user space provided stride instead.
> 
> The driver will still calculate an appropriate stride but will
> use the provided one when it is larger than the calculated one.
> 
> Signed-off-by: Benoit Parrot 
> ---
>  drivers/media/platform/ti-vpe/vpe.c | 28 
>  1 file changed, 20 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/media/platform/ti-vpe/vpe.c 
> b/drivers/media/platform/ti-vpe/vpe.c
> index 2dd67232b3bc..c47151495b6f 100644
> --- a/drivers/media/platform/ti-vpe/vpe.c
> +++ b/drivers/media/platform/ti-vpe/vpe.c
> @@ -1597,6 +1597,7 @@ static int __vpe_try_fmt(struct vpe_ctx *ctx, struct 
> v4l2_format *f,
>   struct v4l2_plane_pix_format *plane_fmt;
>   unsigned int w_align;
>   int i, depth, depth_bytes, height;
> + unsigned int stride = 0;
>  
>   if (!fmt || !(fmt->types & type)) {
>   vpe_err(ctx->dev, "Fourcc format (0x%08x) invalid.\n",
> @@ -1683,16 +1684,27 @@ static int __vpe_try_fmt(struct vpe_ctx *ctx, struct 
> v4l2_format *f,
>   plane_fmt = >plane_fmt[i];
>   depth = fmt->vpdma_fmt[i]->depth;
>  
> - if (i == VPE_LUMA)
> - plane_fmt->bytesperline = (pix->width * depth) >> 3;
> - else
> - plane_fmt->bytesperline = pix->width;
> + stride = (pix->width * fmt->vpdma_fmt[VPE_LUMA]->depth) >> 3;
> + if (stride > plane_fmt->bytesperline)
> + plane_fmt->bytesperline = stride;

The old code calculates different bytes per line for luma and chroma,
but the new one uses the luma calculation for both planes. Is that correct?
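
To make that concrete, I would have expected something along these lines
(sketch only, not tested): keep the per-plane calculation and only then
honour a larger user-supplied value:

		depth = fmt->vpdma_fmt[i]->depth;

		if (i == VPE_LUMA)
			stride = (pix->width * depth) >> 3;
		else
			stride = pix->width;

		if (stride > plane_fmt->bytesperline)
			plane_fmt->bytesperline = stride;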

 Tomi





Re: [PATCH] [media] ti-vpe: get rid of some smatch warnings

2016-11-24 Thread Tomi Valkeinen
Hi,

On 22/11/16 13:09, Mauro Carvalho Chehab wrote:
> When compiled on i386, it produces several warnings:
> 
>   ./arch/x86/include/asm/bitops.h:457:22: warning: asm output is not an 
> lvalue
>   ./arch/x86/include/asm/bitops.h:457:22: warning: asm output is not an 
> lvalue
>   ./arch/x86/include/asm/bitops.h:457:22: warning: asm output is not an 
> lvalue
>   ./arch/x86/include/asm/bitops.h:457:22: warning: asm output is not an 
> lvalue
>   ./arch/x86/include/asm/bitops.h:457:22: warning: asm output is not an 
> lvalue
>   ./arch/x86/include/asm/bitops.h:457:22: warning: asm output is not an 
> lvalue
> 
> I suspect that some gcc optimization could be causing the asm code to be
> incorrectly generated. Splitting it into two macro calls fixes the issues
> and gets rid of 6 smatch warnings, which is a good thing. As it should
> not cause any troubles, as we're basically doing the same thing, let's
> apply such a change to vpe.c.
> 
> Cc: Benoit Parrot 
> Cc: Hans Verkuil 
> Signed-off-by: Mauro Carvalho Chehab 
> ---
>  drivers/media/platform/ti-vpe/vpe.c | 18 +++---
>  1 file changed, 15 insertions(+), 3 deletions(-)

I think the point of COMPILE_TEST is to improve the quality of the code.
This patch doesn't improve the quality, on the contrary.

If those warnings on a (buggy?) i386 gcc are a problem, I suggest
removing COMPILE_TEST for vpe.

 Tomi





Re: [RFC PATCH 2/3] omap4: add CEC support

2016-05-10 Thread Tomi Valkeinen
On 10/05/16 15:26, Hans Verkuil wrote:

>>> diff --git a/arch/arm/boot/dts/omap4.dtsi b/arch/arm/boot/dts/omap4.dtsi
>>> index 2bd9c83..1bb490f 100644
>>> --- a/arch/arm/boot/dts/omap4.dtsi
>>> +++ b/arch/arm/boot/dts/omap4.dtsi
>>> @@ -1006,8 +1006,9 @@
>>> reg = <0x58006000 0x200>,
>>>   <0x58006200 0x100>,
>>>   <0x58006300 0x100>,
>>> - <0x58006400 0x1000>;
>>> -   reg-names = "wp", "pll", "phy", "core";
>>> + <0x58006400 0x900>,
>>> + <0x58006D00 0x100>;
>>> +   reg-names = "wp", "pll", "phy", "core", "cec";
>>
>> "core" contains four blocks, all of which are currently included there
>> in the "core" space. I'm not sure why they weren't split up properly
>> when the driver was written, but I think we should either keep the core
>> as one big block, or split it up to those four sections, instead of just
>> separating the CEC block.
> 
> I don't entirely agree with that, partially because it would mean extra work
> for me :-) and partially because CEC is different from the other blocks in
> that it is an optional HDMI feature.

I don't think it matters in this context if it's an optional HDMI
feature or not. This is about representing the HW memory areas, and I'd
like to keep it consistent, so either one big block, or if we want to
split it, split it up properly as shown in the TRM.

I'm fine with keeping one big "core" memory area there, that should work
fine for CEC, shouldn't it? And it would be the easiest option for you ;).

> 
>>
>>> interrupts = ;
>>> status = "disabled";
>>> ti,hwmods = "dss_hdmi";
>>> diff --git a/drivers/gpu/drm/omapdrm/dss/Kconfig 
>>> b/drivers/gpu/drm/omapdrm/dss/Kconfig
>>> index d1fa730..69638e9 100644
>>> --- a/drivers/gpu/drm/omapdrm/dss/Kconfig
>>> +++ b/drivers/gpu/drm/omapdrm/dss/Kconfig
>>> @@ -71,9 +71,17 @@ config OMAP4_DSS_HDMI
>>> bool "HDMI support for OMAP4"
>>>  default y
>>> select OMAP2_DSS_HDMI_COMMON
>>> +   select MEDIA_CEC_EDID
>>
>> Hmm, what's in MEDIA_CEC_EDID, why does OMAP4 HDMI need to select that?
> 
> Helper functions that manipulate the physical address in an EDID. CEC may be
> optional, but the EDID isn't. These functions were just too big to make them
> static inlines, so instead it's a simple module.

Oh, I see, even if OMAP4's HDMI CEC is disabled, you use cec edid
functions in hdmi4.c.

>>> diff --git a/drivers/gpu/drm/omapdrm/dss/Makefile 
>>> b/drivers/gpu/drm/omapdrm/dss/Makefile
>>> index b651ec9..37eb597 100644
>>> --- a/drivers/gpu/drm/omapdrm/dss/Makefile
>>> +++ b/drivers/gpu/drm/omapdrm/dss/Makefile
>>> @@ -10,6 +10,9 @@ omapdss-$(CONFIG_OMAP2_DSS_SDI) += sdi.o
>>>  omapdss-$(CONFIG_OMAP2_DSS_DSI) += dsi.o
>>>  omapdss-$(CONFIG_OMAP2_DSS_HDMI_COMMON) += hdmi_common.o hdmi_wp.o 
>>> hdmi_pll.o \
>>> hdmi_phy.o
>>> +ifeq ($(CONFIG_OMAP2_DSS_HDMI_CEC),y)
>>> +  omapdss-$(CONFIG_OMAP2_DSS_HDMI_COMMON) += hdmi_cec.o
>>> +endif
>>
>> The file should be hdmi4_cec.o, as it's for omap4. And why the ifeq?
>> Isn't just
>>
>> omapdss-$(OMAP4_DSS_HDMI_CEC) += hdmi4_cec.o
> 
> OMAP4_DSS_HDMI_CEC is a bool, not a tristate.

Yes, and that's fine. You're not compiling hdmi4_cec.o as a module,
you're compiling it into omapdss.o. Objects are added with "omapdss-y +=
xyz" to omapdss.o.
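
I.e. the ifeq wrapper can simply be dropped; with the full Kconfig symbol
name (using the OMAP4_DSS_HDMI_CEC naming suggested earlier) the line
becomes just:

omapdss-$(CONFIG_OMAP4_DSS_HDMI_CEC) += hdmi4_cec.o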

 Tomi





Re: [RFC PATCH 2/3] omap4: add CEC support

2016-05-10 Thread Tomi Valkeinen
Hi Hans,

On 29/04/16 12:39, Hans Verkuil wrote:
> From: Hans Verkuil 
> 
> Signed-off-by: Hans Verkuil 
> ---
>  arch/arm/boot/dts/omap4-panda-a4.dts   |   2 +-
>  arch/arm/boot/dts/omap4-panda-es.dts   |   2 +-
>  arch/arm/boot/dts/omap4.dtsi   |   5 +-
>  drivers/gpu/drm/omapdrm/dss/Kconfig|   8 +
>  drivers/gpu/drm/omapdrm/dss/Makefile   |   3 +
>  drivers/gpu/drm/omapdrm/dss/hdmi.h |  62 ++-
>  drivers/gpu/drm/omapdrm/dss/hdmi4.c|  39 +++-
>  drivers/gpu/drm/omapdrm/dss/hdmi_cec.c | 329 
> +
>  8 files changed, 441 insertions(+), 9 deletions(-)
>  create mode 100644 drivers/gpu/drm/omapdrm/dss/hdmi_cec.c

First, thanks for working on this! It's really nice if we get CEC working.

Are you planning to continue working on this patch, or is this a
proof-of-concept, and you want to move on to other things? I'm fine with
both, the patch looks quite good and I'm sure I can continue from here
if you want.

Also, what's the status of the general CEC support, will these patches
work on v4.7?

> diff --git a/arch/arm/boot/dts/omap4-panda-a4.dts 
> b/arch/arm/boot/dts/omap4-panda-a4.dts
> index 78d3631..f0c1020 100644
> --- a/arch/arm/boot/dts/omap4-panda-a4.dts
> +++ b/arch/arm/boot/dts/omap4-panda-a4.dts
> @@ -13,7 +13,7 @@
>  /* Pandaboard Rev A4+ have external pullups on SCL & SDA */
>  _hdmi_pins {
>   pinctrl-single,pins = <
> - OMAP4_IOPAD(0x09a, PIN_INPUT_PULLUP | MUX_MODE0)/* 
> hdmi_cec.hdmi_cec */
> + OMAP4_IOPAD(0x09a, PIN_INPUT | MUX_MODE0)   /* 
> hdmi_cec.hdmi_cec */

Looks fine as we discussed, but these need to be split into a separate patch
(with an explanation, of course =).

>   OMAP4_IOPAD(0x09c, PIN_INPUT | MUX_MODE0)   /* 
> hdmi_scl.hdmi_scl */
>   OMAP4_IOPAD(0x09e, PIN_INPUT | MUX_MODE0)   /* 
> hdmi_sda.hdmi_sda */
>   >;
> diff --git a/arch/arm/boot/dts/omap4-panda-es.dts 
> b/arch/arm/boot/dts/omap4-panda-es.dts
> index 119f8e6..940fe4f 100644
> --- a/arch/arm/boot/dts/omap4-panda-es.dts
> +++ b/arch/arm/boot/dts/omap4-panda-es.dts
> @@ -34,7 +34,7 @@
>  /* PandaboardES has external pullups on SCL & SDA */
>  _hdmi_pins {
>   pinctrl-single,pins = <
> - OMAP4_IOPAD(0x09a, PIN_INPUT_PULLUP | MUX_MODE0)/* 
> hdmi_cec.hdmi_cec */
> + OMAP4_IOPAD(0x09a, PIN_INPUT | MUX_MODE0)   /* 
> hdmi_cec.hdmi_cec */
>   OMAP4_IOPAD(0x09c, PIN_INPUT | MUX_MODE0)   /* 
> hdmi_scl.hdmi_scl */
>   OMAP4_IOPAD(0x09e, PIN_INPUT | MUX_MODE0)   /* 
> hdmi_sda.hdmi_sda */
>   >;
> diff --git a/arch/arm/boot/dts/omap4.dtsi b/arch/arm/boot/dts/omap4.dtsi
> index 2bd9c83..1bb490f 100644
> --- a/arch/arm/boot/dts/omap4.dtsi
> +++ b/arch/arm/boot/dts/omap4.dtsi
> @@ -1006,8 +1006,9 @@
>   reg = <0x58006000 0x200>,
> <0x58006200 0x100>,
> <0x58006300 0x100>,
> -   <0x58006400 0x1000>;
> - reg-names = "wp", "pll", "phy", "core";
> +   <0x58006400 0x900>,
> +   <0x58006D00 0x100>;
> + reg-names = "wp", "pll", "phy", "core", "cec";

"core" contains four blocks, all of which are currently included there
in the "core" space. I'm not sure why they weren't split up properly
when the driver was written, but I think we should either keep the core
as one big block, or split it up to those four sections, instead of just
separating the CEC block.

>   interrupts = ;
>   status = "disabled";
>   ti,hwmods = "dss_hdmi";
> diff --git a/drivers/gpu/drm/omapdrm/dss/Kconfig 
> b/drivers/gpu/drm/omapdrm/dss/Kconfig
> index d1fa730..69638e9 100644
> --- a/drivers/gpu/drm/omapdrm/dss/Kconfig
> +++ b/drivers/gpu/drm/omapdrm/dss/Kconfig
> @@ -71,9 +71,17 @@ config OMAP4_DSS_HDMI
>   bool "HDMI support for OMAP4"
>  default y
>   select OMAP2_DSS_HDMI_COMMON
> + select MEDIA_CEC_EDID

Hmm, what's in MEDIA_CEC_EDID, why does OMAP4 HDMI need to select that?

>   help
> HDMI support for OMAP4 based SoCs.
>  
> +config OMAP2_DSS_HDMI_CEC

This should probably be OMAP2_DSS_HDMI4_CEC or such, as it's only for
OMAP4 HDMI. Or, following the omap4 hdmi's config name,
"OMAP4_DSS_HDMI_CEC".

> + bool "Enable HDMI CEC support for OMAP4"
> + depends on OMAP4_DSS_HDMI && MEDIA_CEC
> + default y
> + ---help---
> +   When selected the HDMI transmitter will support the CEC feature.
> +
>  config OMAP5_DSS_HDMI
>   bool "HDMI support for OMAP5"
>   default n
> diff --git a/drivers/gpu/drm/omapdrm/dss/Makefile 
> b/drivers/gpu/drm/omapdrm/dss/Makefile
> index 

Re: [RFC PATCH 3/3] encoder-tpd12s015: keep the ls_oe_gpio on while the phys_addr is valid

2016-05-10 Thread Tomi Valkeinen
Hi Hans,

On 29/04/16 12:39, Hans Verkuil wrote:
> From: Hans Verkuil <hans.verk...@cisco.com>
> 
> As long as there is a valid physical address in the EDID and the omap
> CEC support is enabled, then we keep ls_oe_gpio on to ensure the CEC
> signal is passed through the tpd12s015.
> 
> Signed-off-by: Hans Verkuil <hans.verk...@cisco.com>
> Suggested-by: Tomi Valkeinen <tomi.valkei...@ti.com>
> ---
>  drivers/gpu/drm/omapdrm/displays/encoder-tpd12s015.c | 13 -
>  1 file changed, 12 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/omapdrm/displays/encoder-tpd12s015.c 
> b/drivers/gpu/drm/omapdrm/displays/encoder-tpd12s015.c
> index 916a899..efbba23 100644
> --- a/drivers/gpu/drm/omapdrm/displays/encoder-tpd12s015.c
> +++ b/drivers/gpu/drm/omapdrm/displays/encoder-tpd12s015.c
> @@ -16,6 +16,7 @@
>  #include 
>  #include 
>  
> +#include 
>  #include 
>  #include 
>  
> @@ -65,6 +66,7 @@ static void tpd_disconnect(struct omap_dss_device *dssdev,
>   return;
>  
>   gpiod_set_value_cansleep(ddata->ct_cp_hpd_gpio, 0);
> + gpiod_set_value_cansleep(ddata->ls_oe_gpio, 0);
>  
>   dst->src = NULL;
>   dssdev->dst = NULL;
> @@ -142,6 +144,7 @@ static int tpd_read_edid(struct omap_dss_device *dssdev,
>  {
>   struct panel_drv_data *ddata = to_panel_data(dssdev);
>   struct omap_dss_device *in = ddata->in;
> + bool valid_phys_addr = 0;
>   int r;
>  
>   if (!gpiod_get_value_cansleep(ddata->hpd_gpio))
> @@ -151,7 +154,15 @@ static int tpd_read_edid(struct omap_dss_device *dssdev,
>  
>   r = in->ops.hdmi->read_edid(in, edid, len);
>  
> - gpiod_set_value_cansleep(ddata->ls_oe_gpio, 0);
> +#ifdef CONFIG_OMAP2_DSS_HDMI_CEC
> + /*
> +  * In order to support CEC this pin should remain high
> +  * as long as the EDID has a valid physical address.
> +  */
> + valid_phys_addr =
> + cec_get_edid_phys_addr(edid, r, NULL) != CEC_PHYS_ADDR_INVALID;
> +#endif
> + gpiod_set_value_cansleep(ddata->ls_oe_gpio, valid_phys_addr);
>  
>   return r;
>  }

I think this works, but... Maybe it would be cleaner to have the LS_OE
enabled if a cable is connected. That's actually what we had earlier,
but I removed that due to a race issue:

a87a6d6b09de3118e5679c2057b99b7791b7673b ("OMAPDSS: encoder-tpd12s015:
Fix race issue with LS_OE"). Now, with CEC, there's a need to keep LS_OE
enabled even after reading the EDID, so I think it's better to go back
to the old model (after fixing the race issue, of course =).
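
Very roughly, the old model means driving LS_OE from the cable state instead
of toggling it around the EDID read. An illustration only (the race mentioned
above still needs to be handled properly):

static bool tpd_detect(struct omap_dss_device *dssdev)
{
	struct panel_drv_data *ddata = to_panel_data(dssdev);
	bool connected = gpiod_get_value_cansleep(ddata->hpd_gpio);

	/*
	 * Keep the level shifter on while a cable is connected, so that
	 * EDID reads and CEC traffic both keep working.
	 */
	gpiod_set_value_cansleep(ddata->ls_oe_gpio, connected);

	return connected;
}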

 Tomi





Re: [PATCH 2/9] [media] media: omap_vout: Convert omap_vout_uservirt_to_phys() to use get_vaddr_pfns()

2015-06-12 Thread Tomi Valkeinen


On 12/06/15 12:26, Laurent Pinchart wrote:
 Hi Tomi,
 
 On Friday 12 June 2015 12:21:13 Tomi Valkeinen wrote:
 On 11/06/15 07:21, Laurent Pinchart wrote:
 On Wednesday 10 June 2015 06:20:45 Mauro Carvalho Chehab wrote:
 From: Jan Kara j...@suse.cz

 Convert omap_vout_uservirt_to_phys() to use get_vaddr_pfns() instead of
 hand made mapping of virtual address to physical address. Also the
 function leaked page reference from get_user_pages() so fix that by
 properly release the reference when omap_vout_buffer_release() is
 called.

 Signed-off-by: Jan Kara j...@suse.cz
 Signed-off-by: Hans Verkuil hans.verk...@cisco.com
 [hans.verk...@cisco.com: remove unused struct omap_vout_device *vout
 variable]

 Signed-off-by: Mauro Carvalho Chehab mche...@osg.samsung.com

 diff --git a/drivers/media/platform/omap/omap_vout.c
 b/drivers/media/platform/omap/omap_vout.c index
 f09c5f17a42f..7feb6394f111
 100644
 --- a/drivers/media/platform/omap/omap_vout.c
 +++ b/drivers/media/platform/omap/omap_vout.c
 @@ -195,46 +195,34 @@ static int omap_vout_try_format(struct
 v4l2_pix_format *pix) }

  /*

 - * omap_vout_uservirt_to_phys: This inline function is used to convert
 user - * space virtual address to physical address.
 + * omap_vout_get_userptr: Convert user space virtual address to physical
 + * address.

   */

 -static unsigned long omap_vout_uservirt_to_phys(unsigned long virtp)
 +static int omap_vout_get_userptr(struct videobuf_buffer *vb, u32 virtp,
 +   u32 *physp)

  {

 -  unsigned long physp = 0;
 -  struct vm_area_struct *vma;
 -  struct mm_struct *mm = current-mm;
 +  struct frame_vector *vec;
 +  int ret;

/* For kernel direct-mapped memory, take the easy way */

 -  if (virtp = PAGE_OFFSET)
 -  return virt_to_phys((void *) virtp);
 -
 -  down_read(current-mm-mmap_sem);
 -  vma = find_vma(mm, virtp);
 -  if (vma  (vma-vm_flags  VM_IO)  vma-vm_pgoff) {
 -  /* this will catch, kernel-allocated, mmaped-to-usermode
 - addresses */
 -  physp = (vma-vm_pgoff  PAGE_SHIFT) + (virtp - vma-vm_start);
 -  up_read(current-mm-mmap_sem);
 -  } else {
 -  /* otherwise, use get_user_pages() for general userland pages */
 -  int res, nr_pages = 1;
 -  struct page *pages;
 +  if (virtp = PAGE_OFFSET) {
 +  *physp = virt_to_phys((void *)virtp);

 Lovely. virtp comes from userspace and as far as I know it arrives here
 completely unchecked. The problem isn't introduced by this patch, but
 omap_vout buffer management seems completely broken to me, and nobody
 seems to care about the driver. Given that omapdrm should now provide the
 video output capabilities that are missing from omapfb and resulted in
 the development of omap_vout, shouldn't we drop the omap_vout driver ?

 Tomi, any opinion on this ? Do you see any omap_vout capability missing
 from omapdrm ?

 I've never used omap_vout, so I don't even know what it offers. I do
 know it supports VRFB rotation (omap3), which we don't have on omapdrm,
 though. Whether anyone uses that (or omap_vout), I have no idea...

 I'd personally love to get rid of omap_vout, as it causes complications
 on omapfb/omapdss side.
 
 How difficult would it be to implement VRFB rotation in omapdrm ?

I don't know... drivers/video/fbdev/omap2/vrfb.c contains the code to
handle the VRFB hardware, but it does require a certain amount of
bookkeeping and tweaking of the fb size etc. from the omapdrm side. And
probably a new parameter to somehow enable VRFB for a buffer, as you
don't want it to be used by default.

 Tomi



signature.asc
Description: OpenPGP digital signature


Re: [PATCH 2/9] [media] media: omap_vout: Convert omap_vout_uservirt_to_phys() to use get_vaddr_pfns()

2015-06-12 Thread Tomi Valkeinen


On 11/06/15 07:21, Laurent Pinchart wrote:
 Hello,
 
 (CC'ing Tomi Valkeinen)
 
 On Wednesday 10 June 2015 06:20:45 Mauro Carvalho Chehab wrote:
 From: Jan Kara j...@suse.cz

 Convert omap_vout_uservirt_to_phys() to use get_vaddr_pfns() instead of
 hand made mapping of virtual address to physical address. Also the
 function leaked page reference from get_user_pages() so fix that by
 properly release the reference when omap_vout_buffer_release() is
 called.

 Signed-off-by: Jan Kara j...@suse.cz
 Signed-off-by: Hans Verkuil hans.verk...@cisco.com
 [hans.verk...@cisco.com: remove unused struct omap_vout_device *vout
 variable]

 Signed-off-by: Mauro Carvalho Chehab mche...@osg.samsung.com

 diff --git a/drivers/media/platform/omap/omap_vout.c
 b/drivers/media/platform/omap/omap_vout.c index f09c5f17a42f..7feb6394f111
 100644
 --- a/drivers/media/platform/omap/omap_vout.c
 +++ b/drivers/media/platform/omap/omap_vout.c
 @@ -195,46 +195,34 @@ static int omap_vout_try_format(struct v4l2_pix_format
 *pix) }

  /*
 - * omap_vout_uservirt_to_phys: This inline function is used to convert user
 - * space virtual address to physical address.
 + * omap_vout_get_userptr: Convert user space virtual address to physical
 + * address.
   */
 -static unsigned long omap_vout_uservirt_to_phys(unsigned long virtp)
 +static int omap_vout_get_userptr(struct videobuf_buffer *vb, u32 virtp,
 + u32 *physp)
  {
 -unsigned long physp = 0;
 -struct vm_area_struct *vma;
 -struct mm_struct *mm = current->mm;
 +struct frame_vector *vec;
 +int ret;

   /* For kernel direct-mapped memory, take the easy way */
 -if (virtp >= PAGE_OFFSET)
 -return virt_to_phys((void *) virtp);
 -
 -down_read(&current->mm->mmap_sem);
 -vma = find_vma(mm, virtp);
 -if (vma && (vma->vm_flags & VM_IO) && vma->vm_pgoff) {
 -/* this will catch, kernel-allocated, mmaped-to-usermode
 -   addresses */
 -physp = (vma->vm_pgoff << PAGE_SHIFT) + (virtp - vma->vm_start);
 -up_read(&current->mm->mmap_sem);
 -} else {
 -/* otherwise, use get_user_pages() for general userland pages */
 -int res, nr_pages = 1;
 -struct page *pages;
 +if (virtp >= PAGE_OFFSET) {
 +*physp = virt_to_phys((void *)virtp);
 
 Lovely. virtp comes from userspace and as far as I know it arrives here 
 completely unchecked. The problem isn't introduced by this patch, but 
 omap_vout buffer management seems completely broken to me, and nobody seems 
 to 
 care about the driver. Given that omapdrm should now provide the video output 
 capabilities that are missing from omapfb and resulted in the development of 
 omap_vout, shouldn't we drop the omap_vout driver ?
 
 Tomi, any opinion on this ? Do you see any omap_vout capability missing from 
 omapdrm ?

I've never used omap_vout, so I don't even know what it offers. I do
know it supports VRFB rotation (omap3), which we don't have on omapdrm,
though. Whether anyone uses that (or omap_vout), I have no idea...

I'd personally love to get rid of omap_vout, as it causes complications
on omapfb/omapdss side.

 Tomi



signature.asc
Description: OpenPGP digital signature


Re: [PATCH v7 1/3] of: Decrement refcount of previous endpoint in of_graph_get_next_endpoint

2015-01-13 Thread Tomi Valkeinen
On 23/12/14 15:09, Philipp Zabel wrote:
 Decrementing the reference count of the previous endpoint node allows to
 use the of_graph_get_next_endpoint function in a for_each_... style macro.
 All current users of this function that pass a non-NULL prev parameter
 (coresight, rcar-du, imx-drm, soc_camera, and omap2-dss) are changed to
 not decrement the passed prev argument's refcount themselves.
 
 Signed-off-by: Philipp Zabel p.za...@pengutronix.de
 Acked-by: Mauro Carvalho Chehab mche...@osg.samsung.com
 Acked-by: Mathieu Poirier mathieu.poir...@linaro.org
 ---
 Changes since v6:
  - Added omap2-dss.
  - Added Mathieu's ack.
 ---
  drivers/coresight/of_coresight.c  | 13 ++---
  drivers/gpu/drm/imx/imx-drm-core.c| 13 ++---
  drivers/gpu/drm/rcar-du/rcar_du_kms.c | 15 ---
  drivers/media/platform/soc_camera/soc_camera.c|  3 ++-
  drivers/of/base.c |  9 +
  drivers/video/fbdev/omap2/dss/omapdss-boot-init.c |  7 +--
  6 files changed, 12 insertions(+), 48 deletions(-)
 

For omapdss:

Acked-by: Tomi Valkeinen tomi.valkei...@ti.com

 Tomi




signature.asc
Description: OpenPGP digital signature


Re: [PATCH 3/3] driver:gpio remove all usage of gpio_remove retval in driver

2014-07-29 Thread Tomi Valkeinen
On 22/07/14 18:08, Linus Walleij wrote:
 On Sat, Jul 12, 2014 at 10:30 PM, abdoulaye berthe berthe...@gmail.com 
 wrote:
 
 Heads up. Requesting ACKs for this patch or I'm at least warning that it will
 be applied. We're getting rid of the return value from gpiochip_remove().

  drivers/video/fbdev/via/via-gpio.c | 10 +++---
 
 Tomi can you ACK this?

Acked-by: Tomi Valkeinen tomi.valkei...@ti.com

 Tomi




signature.asc
Description: OpenPGP digital signature


Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-21 Thread Tomi Valkeinen
On 20/03/14 19:01, Grant Likely wrote:

 I think depending on a generic graph walk is where I have the biggest
 concern about the design. I don't think it is a good idea for the master
 device to try a generic walk over the graph looking for other devices
 that might be components because it cannot know whether or not further
 links are relevant, or even if other endpoint nodes in the target
 hierarchy actually conform to the graph binding in the same way.
 
 Consider the example of a system with two video controllers (one
 embedded and one discrete), a display mux, and a panel. The display
 controller depends on the mux, and the mux depends on the panel. It
 would be entirely reasonable to start up the display subsystem with the
 embedded controller without the discrete adapter being available, but
 the way the current graph pattern is proposed there is no dependency
 information between the devices.

For some reason I don't understand this master and dependency way of
thinking. I just can't twist my brain to it, please help me =).

With the setup you describe above, why does the video controller depend
on the mux, and why is it the master? Why does the DMA engine not depend
on the embedded video controller, and why is the DMA engine not the master?

With the setup above, what are we really interested in? It's the
display, right? We want to have the display working, with resolution and
video timings that work for the display. The mux and the display
controllers are just necessary evils to make the display work. The
display depends on the mux to provide it a video stream. The mux depends
on the two video controllers to provide the mux two video streams. The
video controllers depend (possibly) on SoC's DMA, or PCI bus to provide
them video data.

And if we consider the same setup as above, but the mux has two
exclusive outputs, it again works fine with the dependency I described.
If you want to enable panel 1, you'll depend on mux and video
controllers, but not on panel 2. So you can leave the panel 2 driver out
and things still work ok.

But note that I don't think this dependency has strict relation to the
DT graph representation, see below.

 I really do think the dependency direction needs to be explicit so that
 a driver knows whether or not a given link is relevant for it to start,

I think that comes implicitly from the driver, it doesn't need to be
described in the DT. If a device has an input port, and the device is
configured to use that input port, the device depends on whatever is on
the other side of the input port. The device driver must know that.

Somehow a device driver needs to find if the driver behind its input
ports are ready. It could use the links in DT directly, if they are
supplied in that direction, or it could rely on someone else parsing the
DT, and exposing the information via some API.

I think it's simpler for the SW to follow the links directly, but that
would mean having the links in the opposite direction than the data
flow, which feels a bit odd to me.

 and there must be driver code that knows how to interpret the target
 node. A device that is a master needs to know which links are
 dependencies, and which are not.

Well, again, I may not quite understand what the master means here. But
for me, the display is the master of the pipeline. The driver for the
display is the one that has to decide what kind of video signal is
acceptable, how the signal must be enabled, and disabled, etc.

When someone (the master's master =) tells the panel to enable itself,
the panel needs to use an API to configure and enable its input ports.
The devices on the other end of the input ports then configure and
enable their inputs. And so on.

Anyway, I do think this is more of a SW organization topic than how we
should describe the hardware. As I see it, the parent-child
relationships in the DT describe the control paths and the graph
describes the data paths. Having the data paths described in the
direction of data flow (or double-linked in case of bi-dir link) sounds
logical to me, but I think the inverse could work fine too.

But using some kind of CPU centric direction doesn't sound very usable,
it makes no sense for cases with peripheral-to-peripheral links, and the
control bus already describes the CPU centric direction in cases where
there exists a clear CPU-to-peripheral direction.

 I'm not even talking about the bi-directional link issue. This issue
 remains regardless of whether or not bidirectional links are used.
 
 I would solve it in one of the following two ways:
 
 1) Make masters only look at one level of dependency. Make the component
 driver responsible for checking /its/ dependencies. If its dependencies
 aren't yet met, then don't register the component as ready. We could
 probably make the drivers/base/component code allow components to pull
 in additional components as required. This approach shouldn't even
 require a change to the binding and eliminates any need for 

Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-21 Thread Tomi Valkeinen
On 21/03/14 13:47, Grant Likely wrote:

 I'm firm on the opinion that the checking must also happen at runtime.
 The biggest part of my objection has been how easy it would be to get a
 linkage out of sync, and dtc is not necessarily the last tool to touch
 the dtb before the kernel gets booted. I want the kernel to flat out
 reject any linkage that is improperly formed.

Isn't it trivial to verify it with the current v4l2 bindings? An
endpoint must have a 'remote-endpoint' property, and the endpoint on the
other end must have a similar property, pointing back at the first
endpoint. Anything else is an error.

I agree that it's easier to write bad links in the dts with
double-linking than with single-linking, but it's still trivial to
verify it in the kernel.
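
As a minimal sketch of the check (the function name is made up, while
of_parse_phandle() and of_node_put() are the normal OF calls):

static bool endpoint_link_is_symmetric(struct device_node *ep)
{
	struct device_node *remote, *back;
	bool ok;

	remote = of_parse_phandle(ep, "remote-endpoint", 0);
	if (!remote)
		return false;

	back = of_parse_phandle(remote, "remote-endpoint", 0);
	ok = (back == ep);

	of_node_put(back);
	of_node_put(remote);

	return ok;
}

Run that over every endpoint node and reject the DT if any of them fails.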

 Tomi




signature.asc
Description: OpenPGP digital signature


Re: [PATCH v4 1/3] [media] of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-21 Thread Tomi Valkeinen
On 21/03/14 00:32, Laurent Pinchart wrote:

 The OF graph bindings documentation could just specify the ports node as 
 optional and mandate individual device bindings to specify it as mandatory or 
 forbidden (possibly with a default behaviour to avoid making all device 
 bindings too verbose).

Isn't it so that if the device has one port, it can always do without
'ports', but if it has multiple ports, it always has to use 'ports' so
that #address-cells and #size-cells can be defined?

If so, there's nothing left for the individual device bindings to decide.

 Tomi




signature.asc
Description: OpenPGP digital signature


Re: [PATCH v4 1/3] [media] of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-21 Thread Tomi Valkeinen
On 21/03/14 16:13, Laurent Pinchart wrote:
 Hi Tomi,
 
 On Friday 21 March 2014 15:37:17 Tomi Valkeinen wrote:
 On 21/03/14 00:32, Laurent Pinchart wrote:
 The OF graph bindings documentation could just specify the ports node as
 optional and mandate individual device bindings to specify it as mandatory
 or forbidden (possibly with a default behaviour to avoid making all
 device bindings too verbose).

 Isn't it so that if the device has one port, it can always do without
 'ports', but if it has multiple ports, it always has to use 'ports' so
 that #address-cells and #size-cells can be defined?
 
 You can put the #address-cells and #size-cells property in the device node 
 directly without requiring a ports subnode.

Ah, right. So 'ports' is only needed when the device node has other
child nodes than the ports, and those nodes need different
#address-cells and #size-cells than the ports.

In that case it sounds fine to leave it for the driver bindings to decide.

 Tomi




signature.asc
Description: OpenPGP digital signature


Re: [GIT PULL] Move device tree graph parsing helpers to drivers/of

2014-03-18 Thread Tomi Valkeinen
On 18/03/14 01:30, Laurent Pinchart wrote:

 I agree with you. I know that DT bindings review takes too much time, slows 
 development down and is just generally painful. I'm trying to reply to this e-
 mail thread as fast as possible, but I'm also busy with other tasks :-/
 
 The lack of formal consensus comes partly from the fact that people are busy 
 and that the mail thread is growing big. There's still two open questions 
 from 
 my view of the whole discussion:
 
 - Do we really want to drop bidirectional links ? Grant has been pretty vocal 
 about that, but there has been several replies with arguments for 
 bidirectional links, and no reply from him afterwards. Even though that 
 wouldn't be the preferred solution for everybody, there doesn't seem to be a 
 strong disagreement about dropping bidirectional links, as long as we can 
 come 
 up with a reasonable implementation.
 
 - If we drop bidirectional links, what link direction do we use ? There has 
 been several proposals (including north, which I think isn't future-proof 
 as 
 it assumes an earth-centric model) and no real agreement, although there 
 seems 
 to be a consensus among several developers that the core OF graph bindings 
 could leave that to be specified by subsystem bindings. We would still have 
 to 
 agree on a direction for the display subsystem of course.
 
 If my above explanation isn't too far from the reality the next step could be 
 to send a new version of the DT bindings proposal as a ping.

I agree with the above.

However, I also think we should just go forward with the bidirectional
links for now. The bindings for bidir links are already in the mainline
kernel, so they can't be seen as broken.

When we have an agreement about the direction, and we've got common
parsing code, it's trivial to convert the existing links to single
direction links, and the old dts files with bidir links continue to work
fine.

This is what I'm planning to do with OMAP display subsystem, as I
_really_ want to get the DT support merged for 3.15. The current mix of
pdata + DT that we have for OMAP display is an unmaintainable mess.

So unless I get a nack from someone (I've pinged Grant twice about
this), or someone explains why it's a bad idea, I'll push the OMAP
display bindings [1] for 3.15 with bidir bindings, and change them to
single-dir later.

Note that I did remove the abbreviated endpoint format that I had there
earlier, so now the bindings are fully compatible with the v4l2 bindings.

 Tomi

[1] http://article.gmane.org/gmane.linux.drivers.devicetree/63885




signature.asc
Description: OpenPGP digital signature


Re: [GIT PULL] Move device tree graph parsing helpers to drivers/of

2014-03-14 Thread Tomi Valkeinen
Hi Philipp, Grant,

On 14/03/14 14:19, Philipp Zabel wrote:

 People completely disagree about the direction the phandle links should
 point in. I am still of the opinion that the generic binding should describe
 just the topology, that the endpoint links in the kernel should represent an
 undirected graph and the direction of links should not matter at all for the
 generic graph bindings.

 I would also not mandate a specific direction at the of-graph level and 
 leave 
 it to subsystems (or possibly drivers) to specify the direction.
 
 Thank you. Can everybody live with this?

Yes, I'd like to reserve the possibility for double-linking. If the
endpoint links are used to tell the dataflow direction, then
double-linking could be used for bi-directional dataflows.

But this doesn't help much for the video drivers under work, which I
think we are all most interested in at the moment. We still need to
decide how we link the endpoint for those.

I'd like to go forward with the mainline v4l2 style double-linking, as
that is already in use. It would allow us to proceed _now_, and maybe
even get display support to 3.15. Otherwise this all gets delayed for
who knows how long, and the displays in question cannot be used by the
users.

Deprecating the other link later from the existing video bindings would
be trivial, as there would basically be nothing to do except remove the
other link.

 Tomi




signature.asc
Description: OpenPGP digital signature


Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-12 Thread Tomi Valkeinen
On 12/03/14 12:25, Russell King - ARM Linux wrote:
 On Mon, Mar 10, 2014 at 02:52:53PM +0100, Laurent Pinchart wrote:
 In theory unidirectional links in DT are indeed enough. However, let's not 
 forget the following.

 - There's no such thing as single start points for graphs. Sure, in some 
 simple cases the graph will have a single start point, but that's not a 
 generic rule. For instance the camera graphs 
 http://ideasonboard.org/media/omap3isp.ps and 
 http://ideasonboard.org/media/eyecam.ps have two camera sensors, and thus 
 two 
 starting points from a data flow point of view.
 
 I think we need to stop thinking of a graph linked in terms of data
 flow - that's really not useful.
 
 Consider a display subsystem.  The CRTC is the primary interface for
 the CPU - this is the most interesting interface, it's the interface
 which provides access to the picture to be displayed for the CPU.  Other
 interfaces are secondary to that purpose - reading the I2C DDC bus for
 the display information is all secondary to the primary purpose of
 displaying a picture.
 
 For a capture subsystem, the primary interface for the CPU is the frame
 grabber (whether it be an already encoded frame or not.)  The sensor
 devices are all secondary to that.
 
 So, the primary software interface in each case is where the data for
 the primary purpose is transferred.  This is the point at which these
 graphs should commence since this is where we would normally start
 enumeration of the secondary interfaces.
 
 V4L2 even provides interfaces for this: you open the capture device,
 which then allows you to enumerate the capture device's inputs, and
 this in turn allows you to enumerate their properties.  You don't open
 a particular sensor and work back up the tree.

We do it the other way around in OMAP DSS. It's the displays the user is
interested in, so we enumerate the displays, and if the user wants to
enable a display, we then follow the links from the display towards the
SoC, configuring and enabling the components on the way.

I don't have a strong opinion on the direction, I think both have their
pros. In any case, that's more of a driver model thing, and I'm fine
with linking in the DT outwards from the SoC (presuming the
double-linking is not ok, which I still like best).

 I believe trying to do this according to the flow of data is just wrong.
 You should always describe things from the primary device for the CPU
 towards the peripheral devices and never the opposite direction.

In that case there's possibly the issue I mentioned in other email in
this thread: an encoder can be used in both a display and a capture
pipeline. Describing the links outwards from CPU means that sometimes
the encoder's input port is pointed at, and sometimes the encoder's
output port is pointed at.

That's possibly ok, but I think Grant was of the opinion that things
should be explicitly described in the binding documentation: either a
device's port must contain a 'remote-endpoint' property, or it must not,
but no sometimes. But maybe I took his words too literally.

Then there's also the audio example Philipp mentioned, where there is no
clear "outwards from SoC" direction for the link, as the link was
bi-directional and between two non-SoC devices.

 Tomi




signature.asc
Description: OpenPGP digital signature


Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-11 Thread Tomi Valkeinen
On 11/03/14 13:43, Laurent Pinchart wrote:

 We could scan the whole tree for entities, ports and endpoints once, in
 the base oftree code, and put that into a graph structure, adding the
 backlinks.
 The of_graph_* helpers could then use that graph instead of the device
 tree.
 
 That could work. The complexity would still be quadratic, but we would parse 
 the full device tree once only.
 
 The runtime complexity would still be increased, as the graph helpers would 
 need to find the endpoint object in the parsed graph corresponding to the DT 
 node they get as an argument. That's proportional to the number of graph 
 elements, not the total number of DT nodes, so I suppose it's not too bad.
 
 We also need to make sure this would work with insertion of DT fragments at 
 runtime. Probably not a big deal, but it has to be kept in mind.

About the endpoint linking direction... As I think was suggested, the
base logic would be to make endpoints point outward from the SoC, i.e.
a display controller would point to a panel, and a capture controller
would point to a sensor.

But how about this case:

I have a simple video pipeline with a display controller, an encoder and
a panel, as follows:

dispc -> encoder -> panel

Here the arrows show which way the remote-endpoint links point. So
looking at the encoder, the encoder's input port is pointed at by the
dispc, and the encoder's output port points at the panel.

Then, I have a capture pipeline, with a capture controller, an encoder
(the same one that was used for display above) and a sensor, as follows:

camc -> encoder -> sensor

Again the arrows show the links. Note that here the encoder's _output_
port is pointed at by the camc, and the encoder's _input_ port points at
the sensor.

So depending on the use case, the endpoints would point to opposite
direction from the encoder's point of view.

And if I gathered Grant's opinion correctly (correct me if I'm wrong),
he thinks things should be explicit, i.e. the bindings for, say, an
encoder should state that the encoder's output endpoint _must_ contain a
remote-endpoint property, whereas the encoder's input endpoint _must
not_ contain a remote-endpoint property.

 Tomi




signature.asc
Description: OpenPGP digital signature


Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-11 Thread Tomi Valkeinen
On 11/03/14 15:16, Laurent Pinchart wrote:

 And if I gathered Grant's opinion correctly (correct me if I'm wrong),
 he thinks things should be explicit, i.e. the bindings for, say, an
 encoder should state that the encoder's output endpoint _must_ contain a
 remote-endpoint property, whereas the encoder's input endpoint _must
 not_ contain a remote-endpoint property.
 
 Actually my understanding was that DT links would have the same direction as the 
 data flow. There would be no ambiguity in that case as the direction of the 

Ok. At least at some point in the discussion the rule of thumb was to
have the master point at the slave, which are a bit vague terms, but
I think both display and camera controllers were given as examples of
master.

 data flow is known. What happens for bidirectional data flows still need to 
 be 
 discussed though. And if we want to use the of-graph bindings to describe 
 graphs without a data flow, a decision will need to be taken there too.

Yep. My worry is that if the link direction is defined in the bindings
for the device, somebody will at some point have a complex board which
happens to use two devices in such a way, that either neither of them
point to each other, or both point to each other.

Maybe we can make sure that never happens, but I feel a bit nervous
about it especially for bi-directional cases. If the link has no clear
main-dataflow direction, it may be a bit difficult to say which way to link.

With double-linking all those concerns just disappear.

 Tomi




signature.asc
Description: OpenPGP digital signature


Re: [PATCH v4 1/3] [media] of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-10 Thread Tomi Valkeinen
On 08/03/14 14:23, Grant Likely wrote:

 That's fine. In that case the driver would specifically require the
 endpoint to be that one node although the above looks a little weird

 The driver can't require that. It's up to the board designer to decide
 how many endpoints are used. A driver may say that it has a single input
 port. But the number of endpoints for that port is up to the use case.
 
 Come now, when you're writing a driver you know if it will ever be
 possible to have more than one port. If that is the case then the
 binding should be specifically laid out for that. If there will never be
 multiple ports and the binding is unambiguous, then, and only then,
 should the shortcut be used, and only the shortcut should be accepted.

I was talking about endpoints, not ports. There's no ambiguity about the
number of ports; that comes directly from the hardware for that specific
component. The number of endpoints, however, comes from the board
hardware. The driver writer cannot know that.
hardware. The driver writer cannot know that.

 to me. I would recommend that if there are other non-port child nodes
 then the ports should still be encapsulated by a ports node.  The device
 binding should not be ambiguous about which nodes are ports.

 Hmm, ambiguous in what way?
 
 Parsing the binding now consists of a ladder of 'ifs' that gives three
 distinct different behaviours for no benefit. You don't want that in

It considerably lessens the amount of text in the DT for many use cases,
making it easier to write and maintain the dts files.

 bindings because it makes it more difficult to get the parsing right in
 the first place, and to make sure that all users parse it in the same
 way (Linux, U-Boot, BSD, etc). Bindings should be as absolutely simple
 as possible.

Well, yes, I agree there. This is not the simplest of bindings. I'd be
more than happy if we would come up with simpler version of this, which
would still allow us to have the same descriptive power.

 Just to be clear, I have no problem with having the option in the
 pattern, but the driver needs to be specific about what layout it
 expects.

If we forget the shortened endpoint format, I think it can be quite
specific.

A device has either one port, in which case it should require the
'ports' node to be omitted, or the device has more than one port, in
which case it should require 'ports' node.

Note that the original v4l2 binding doc says that 'ports' is always
optional.

 Tomi




signature.asc
Description: OpenPGP digital signature


Re: [PATCH v4 3/3] Documentation: of: Document graph bindings

2014-03-10 Thread Tomi Valkeinen
On 08/03/14 14:25, Grant Likely wrote:

 Sure. If endpoints are logical, then only create the ones actually
 hooked up. No problem there. But nor do I see any issue with having
 empty connections if the board author things it makes sense to have them
 in the dtsi.

I don't think they are usually logical, although they probably might be
in some cases.

As I see it, a port is a group of pins in a hardware component, and
two endpoints define a connection between two ports, which on the HW
level are the wires between the ports.

So a port with two endpoints is a group of pins, with wires that go from
the same pins to two different components.

 Tomi




signature.asc
Description: OpenPGP digital signature


Re: [PATCH v4 1/3] [media] of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-10 Thread Tomi Valkeinen
On 10/03/14 10:58, Andrzej Hajda wrote:

 I want to propose another solution to simplify bindings, in fact I have
 few ideas to consider:
 
 1. Use named ports instead of address-cells/regs. Ie instead of
 port@number schema, use port-function. This will allow to avoid ports
 node and #address-cells, #size-cells, reg properties.
 Additionally it should increase readability of the bindings.
 
 device {
   port-dsi {
   endpoint { ... };
   };
   port-rgb {
   endpoint { ... };
   };
 };
 
 It is little bit like with gpios vs reset-gpios properties.
 Another advantage I see we do not need do mappings of port numbers
 to functions between dts, drivers and documentation.

That makes it more difficult to iterate the ports. You need to go
through all the nodes and use partial name matching. I think for things
like gpios, the driver always gives the full name, so there's no need
for any kind of partial matching or searching.

It looks nice when just looking at the DT, though.

 Tomi




signature.asc
Description: OpenPGP digital signature


Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-10 Thread Tomi Valkeinen
On 08/03/14 13:41, Grant Likely wrote:

 Ok. If we go for single directional link, the question is then: which
 way? And is the direction different for display and camera, which are
 kind of reflections of each other?
 
 In general I would recommend choosing whichever device you would
 sensibly think of as a master. In the camera case I would choose the
 camera controller node instead of the camera itself, and in the display
 case I would choose the display controller instead of the panel. The
 binding author needs to choose what she things makes the most sense, but
 drivers can still use if it it turns out to be 'backwards'

I would perhaps choose the same approach, but at the same time I think
it's far from clear-cut. The display controller doesn't control the panel any
more than a DMA controller controls, say, the display controller.

In fact, in earlier versions of OMAP DSS DT support I had a simpler port
description, and in that I had the panel as the master (i.e. link from
panel to dispc) because the panel driver uses the display controller's
features to provide the panel device a data stream.

And even with the current OMAP DSS DT version, which uses the v4l2 style
ports/endpoints, the driver model is still the same, and only links
towards upstream are used.

So one reason I'm happy with the dual-linking is that I can easily
follow the links from the downstream entities to upstream entities, and
other people, who have a different driver model, can easily do the opposite.

But I agree that single-linking is enough and this can be handled at
runtime, even if it makes the code more complex. And perhaps requires
extra data in the dts, to give the start points for the graph.

 Tomi




signature.asc
Description: OpenPGP digital signature


Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-10 Thread Tomi Valkeinen
On 10/03/14 15:52, Laurent Pinchart wrote:

 In theory unidirectional links in DT are indeed enough. However, let's not 
 forget the following.
 
 - There's no such thing as single start points for graphs. Sure, in some 
 simple cases the graph will have a single start point, but that's not a 
 generic rule. For instance the camera graphs 
 http://ideasonboard.org/media/omap3isp.ps and 
 http://ideasonboard.org/media/eyecam.ps have two camera sensors, and thus two 
 starting points from a data flow point of view. And if you want a better 
 understanding of how complex media graphs can become, have a look at 
 http://ideasonboard.org/media/vsp1.0.pdf (that's a real world example, albeit 
 all connections are internal to the SoC in that particular case, and don't 
 need to be described in DT).
 
 - There's also no such thing as a master device that can just point to slave 
 devices. Once again simple cases exist where that model could work, but real 
 world examples exist of complex pipelines with dozens of elements all 
 implemented by a separate IP core and handled by separate drivers, forming a 
 graph with long chains and branches. We thus need real graph bindings.
 
 - Finally, having no backlinks in DT would make the software implementation 
 very complex. We need to be able to walk the graph in a generic way without 
 having any of the IP core drivers loaded, and without any specific starting 
 point. We would thus need to parse the complete DT tree, looking at all nodes 
 and trying to find out whether they're part of the graph we're trying to 
 walk. 
 The complexity of the operation would be at best quadratic to the number of 
 nodes in the whole DT and to the number of nodes in the graph.

I did use plural when I said to give the start points

If you have a list of starting points in the DT, a graph helper or
something could create a runtime representation of the graph at some
early phase during the boot, which would include backlinks. The
individual drivers could use that runtime graph, instead of the DT graph.

But it still sounds considerably more complex than double-links in DT.
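
Just to illustrate, the runtime representation wouldn't need to be
anything fancier than something like this (all names hypothetical):

/* Built once at boot by walking the DT from the given start points. */
struct parsed_endpoint {
	struct device_node *local_node;	/* the endpoint node in the DT */
	struct device_node *dev_node;	/* the device the endpoint belongs to */
	struct parsed_endpoint *remote;	/* the backlink the DT itself lacks */
};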

 Tomi




signature.asc
Description: OpenPGP digital signature


Re: [PATCH v4 3/3] Documentation: of: Document graph bindings

2014-03-08 Thread Tomi Valkeinen
On 07/03/14 20:11, Grant Likely wrote:

 Any board not using that port can just leave the endpoint disconnected.

 Hmm I see. I'm against that.

 I think the SoC dtsi should not contain endpoint node, or even port node
 (at least usually). It doesn't know how many endpoints, if any, a
 particular board has. That part should be up to the board dts.
 
 Why? We have established precedence for unused devices still being in
 the tree. I really see no issue with it.

I'm fine with having ports defined in the SoC dtsi. A port is a physical
thing, a group of pins, for example.

But an endpoint is a description of the other end of a link. To me, a
single endpoint makes no sense, there has to be a pair of endpoints. The
board may need 0 to n endpoints, and the SoC dtsi cannot know how many
are needed.

If the SoC dtsi defines a single endpoint for a port, and the board
needs to use two endpoints for that port, it gets really messy: one
endpoint is defined in the SoC dtsi, and used in the board dts. The
second endpoint for the same port needs to be defined separately in the
board file. I.e. something like:

/* the first ep */
port1_ep {
remote-endpoint = ..;
};

port1 {
/* the second ep */
endpoint@2 {
remote-endpoint = ..;
};
};

Versus:

port1 {
/* the first ep */
endpoint@1 {
remote-endpoint = ..;
};

/* the second ep */
endpoint@2 {
remote-endpoint = ..;
};
};

 Tomi




signature.asc
Description: OpenPGP digital signature


Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-08 Thread Tomi Valkeinen
On 07/03/14 19:05, Grant Likely wrote:
 On Wed, 26 Feb 2014 15:48:49 +0100, Philipp Zabel p.za...@pengutronix.de 
 wrote:
 Hi Grant,

 thank you for the comments.
 
 Hi Philipp,
 
 I've got lots of comments and quesitons below, but I must say thank you
 for doing this. It is a helpful description.

Thanks for the comments. I'll answer from my point of view, which may be
different than Philipp's.

 Bear with me, I'm going to comment on each point. I'm not criticizing,
 but I do want to understand the HW design. In particular I want to
 understand why the linkage is treated as bidirectional instead of one
 device being the master.
 
 In this specific example, what would be managing the transfer? Is there
 an overarching driver that assembles the pieces, or would you expect
 individual drivers to find each other?

The direction of the dataflow depends on the case. For camera, the data
flows from external components towards the SoC. For display, vice versa.
I don't know much about camera, but for display there are bi-directional
buses, like MIPI DBI and MIPI DSI. However, in those cases, the main
flow direction is clear, towards the display.

Then again, these are meant to be generic graph bindings, and the
dataflow could be fully bi-directional.

As for the driver model (does it matter when we are talking about DT
bindings?), I think there are very different opinions. My thought for
display is that there is always an overarching driver, something that
manages the video pipeline as a whole.

However, each of the individual drivers in the video pipeline will
control its input ports. So, say, a panel driver gets a command from the
overarching control driver to enable the panel device. The panel driver
reacts by calling the encoder (i.e. the one that outputs pixels to the
panel), commanding it to enable the video stream.

I know some (many?) people think the other way around, that the encoder
commands the panel to enable itself. I don't think that's versatile
enough. As a theoretical, slightly exaggerated, example, we could have a
panel that wants the pixel clock to be enabled for a short while before
enabling the actual pixel stream. And maybe the panel driver needs to do
something before the pixel stream is enabled (say, send a command to the
panel).

The above is very easy to do if the panel driver is in control of the
received video stream. It's rather difficult to do if the encoder is in
control.
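
As a sketch of the model I mean (every name below is made up, this is
not any existing API):

static int panel_enable(struct panel_device *panel)
{
	struct video_source *src = panel->in;
	int r;

	/* panel specific preparation: power up, maybe ask for the pixel
	 * clock for a while, send setup commands, ... */

	/* only then ask the upstream encoder to start the pixel stream */
	r = src->ops->enable_stream(src);
	if (r)
		return r;

	/* panel specific post-enable, e.g. a "display on" command */

	return 0;
}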

Also, I think a panel (or any encoder) can be compared to a display
controller: a display controller uses DMA to transfer pixel data from
the memory to the display controller. A panel or encoder uses the
incoming video bus to transfer pixel data from the previous display
component to the panel or encoder device. And I don't think anyone says
the DMA driver should be in control of the display controller.

But this is going quite a bit into the SW architecture. Does it matter
when we are talking about the representation of HW in the DT data?

 In all of the above examples I see a unidirectional data flow. There are
 producer ports and consumer ports. Is there any (reasonable) possibility
 of peer ports that are bidirectional?

Yes, as I mentioned above. However, I do believe that at the moment all
the cases have a clear main direction. But then, if these are generic
bindings, do we want to rely on that?

 According to video-interfaces.txt, it is expected that endpoints contain
 phandles pointing to the remote endpoint on both sides. I'd like to
 leave this up to the more specialized bindings, but I can see that this
 makes enumerating the connections starting from each device tree node
 easier, for example at probe time.
 
 This has come up in the past. That approach comes down to premature
 optimization at the expense of making the data structure more prone to
 inconsistency. I consider it to be a bad pattern.
 
 Backlinks are good for things like dynamically linked lists that need to
 be fast and the software fully manages the links. For a data structure like
 the FDT it is better to have the data in one place, and one place only.
 Besides, computers are *good* at parsing data structures. :-)
 
 I appreciate that existing drivers may be using the backlinks, but I
 strongly recommend not doing it for new users.

Ok. If we go for single directional link, the question is then: which
way? And is the direction different for display and camera, which are
kind of reflections of each other?

 Another thought. In terms of the pattern, I would add a recommendation
 that there should be a way to identify ports of a particular type. ie.
 If I were using the pattern to implement an patch bay of DSP filters,
 where each input and output, then each target node should have a unique
 identifier property analogous to interrupt-controller or
 gpio-controller. In this fictitious example I would probably choose
 audiostream-input-port and audiostream-output-port as empty
 properties in the port nodes. (I'm not suggesting a change to 

Re: [PATCH v4 1/3] [media] of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-08 Thread Tomi Valkeinen
On 07/03/14 19:18, Grant Likely wrote:

 From a pattern perspective I have no problem with that From an
 individual driver binding perspective that is just dumb! It's fine for
 the ports node to be optional, but an individual driver using the
 binding should be explicit about which it will accept. Please use either
 a flag or a separate wrapper so that the driver can select the
 behaviour.

Why is that? The meaning of the DT data stays the same, regardless of
the existence of the 'ports' node. The driver uses the graph helpers to
parse the port/endpoint data, so individual drivers don't even have to
care about the format used.

As I see it, the graph helpers should allow the drivers to iterate the
ports and the endpoints for a port. These should work the same way, no
matter which abbreviated format is used in the dts.

 The helper should find the two endpoints in both cases.
 Tomi suggests an even more compact form for devices with just one port:

  device {
  endpoint { ... };

  some-other-child { ... };
  };
 
 That's fine. In that case the driver would specifically require the
 endpoint to be that one node although the above looks a little weird

The driver can't require that. It's up to the board designer to decide
how many endpoints are used. A driver may say that it has a single input
port. But the number of endpoints for that port is up to the use case.

 to me. I would recommend that if there are other non-port child nodes
 then the ports should still be encapsulated by a ports node.  The device
 binding should not be ambiguous about which nodes are ports.

Hmm, ambiguous in what way?

If the dts uses 'ports' node, all the ports and endpoints are inside
that 'ports' node. If there is no 'ports' node, there may be one or more
'port' nodes, which then contain endpoints. If there are no 'port'
nodes, there may be a single 'endpoint' node.

True, there are many ifs there. But I don't think it's ambiguous. The
reason we have these abbreviations is that the full 'ports' node is not
needed that often, and it is rather verbose. In almost all the use
cases, panels and connectors can use the single endpoint format.
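
To be concrete, the 'ifs' are roughly the following (a sketch, not the
actual helper; of_get_child_by_name() and of_node_put() exist as such):

static struct device_node *find_first_port(struct device_node *dev_node)
{
	struct device_node *ports, *port;

	/* full form: device / ports / port@N / endpoint */
	ports = of_get_child_by_name(dev_node, "ports");
	if (ports) {
		port = of_get_child_by_name(ports, "port");
		of_node_put(ports);
		return port;
	}

	/*
	 * Abbreviated form: device / port / endpoint. If this also returns
	 * NULL, the caller may still look for a bare 'endpoint' child
	 * directly under dev_node (single port, single endpoint shortcut).
	 */
	return of_get_child_by_name(dev_node, "port");
}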

 Tomi




signature.asc
Description: OpenPGP digital signature


Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-03-08 Thread Tomi Valkeinen
On 07/03/14 19:06, Grant Likely wrote:
 On Thu, 27 Feb 2014 10:36:36 +0200, Tomi Valkeinen tomi.valkei...@ti.com 
 wrote:
 On 26/02/14 16:48, Philipp Zabel wrote:

 I would like the document to acknowledge the difference from the
 phandle+args pattern used elsewhere and a description of when it would
 be appropriate to use this instead of a simpler binding.

 Alright. The main point of this binding is that the devices may have
 multiple distinct ports that each can be connected to other devices.

 The other main point with this binding is multiple endpoints per port.
 So you can have, say, a display controller, with single port, which has
 two endpoints going to two separate LCD panels.

 In physical level that would usually mean that the same pins from the
 display controller are connected to two panels. Most likely this would
 mean that only one panel can be used at a time, possibly with different
 settings (say, 16 RGB pins for one panel, 24 RGB pins for the other).
 
 What device is in control in that scenario?

The endpoints in a single port are exclusive, only one can be active at
a time. So the control for the active path would be no different than in
the single-panel case (for which people have different opinions).

 Tomi




signature.asc
Description: OpenPGP digital signature


Re: [PATCH v5 5/7] [media] of: move common endpoint parsing to drivers/of

2014-03-05 Thread Tomi Valkeinen
On 04/03/14 17:47, Philipp Zabel wrote:
 Am Dienstag, den 04.03.2014, 14:21 +0200 schrieb Tomi Valkeinen:
 On 04/03/14 13:36, Philipp Zabel wrote:
 [...]
 Can port_node be NULL? Probably only if something is quite wrong, but
 maybe it's safer to return error in that case.

 both of_property_read_u32 and of_node_put can handle port_node == NULL.
 I'll add a WARN_ONCE here as for of_graph_get_next_endpoint and continue
 on.

 Isn't it better to return an error?
 
 I am not sure. We can still correctly parse the endpoint properties of a
 parentless node. All current users ignore the return value anyway. So as
 long as we still do the memset and and set local_node and id, returning
 an error effectively won't change the current behaviour.

Is the parentless node case an error or not? If it's not, we can handle
it silently and return 0, no WARN needed. If it is an error, we should
return an error, even if nobody is currently handling that (which is a
bug in itself, as the function does return an error value, and callers
should handle it).

 Tomi




signature.asc
Description: OpenPGP digital signature


Re: [PATCH v6 0/8] Move device tree graph parsing helpers to drivers/of

2014-03-05 Thread Tomi Valkeinen
Hi,

On 05/03/14 11:20, Philipp Zabel wrote:
 Hi,
 
 this version of the OF graph helper move series further addresses a few of
 Tomi's and Sylwester's comments.
 
 Changes since v5:
  - Fixed spelling errors and a wrong device node name in the link section
  - Added parentless previous endpoint's full name to warning
  - Fixed documentation comment for of_graph_parse_endpoint
  - Unrolled for-loop in of_graph_get_remote_port_parent
 
 Philipp Zabel (8):
   [media] of: move graph helpers from drivers/media/v4l2-core to
 drivers/of
   Documentation: of: Document graph bindings
   of: Warn if of_graph_get_next_endpoint is called with the root node
   of: Reduce indentation in of_graph_get_next_endpoint
   [media] of: move common endpoint parsing to drivers/of
   of: Implement simplified graph binding for single port devices
   of: Document simplified graph binding for single port devices
   of: Warn if of_graph_parse_endpoint is called with the root node

So, as I've pointed out, I don't agree with the API, as it's too limited
and I can't use it, but as this series is (mostly) about moving the
current API to a common place, it's fine for me.

Acked-by: Tomi Valkeinen tomi.valkei...@ti.com

 Tomi




signature.asc
Description: OpenPGP digital signature


Re: [PATCH v5 5/7] [media] of: move common endpoint parsing to drivers/of

2014-03-04 Thread Tomi Valkeinen
Hi Philipp,

On 27/02/14 19:35, Philipp Zabel wrote:
 This patch adds a new struct of_endpoint which is then embedded in struct
 v4l2_of_endpoint and contains the endpoint properties that are not V4L2
 (or even media) specific: the port number, endpoint id, local device tree
 node and remote endpoint phandle. of_graph_parse_endpoint parses those
 properties and is used by v4l2_of_parse_endpoint, which just adds the
 V4L2 MBUS information to the containing v4l2_of_endpoint structure.

snip

 diff --git a/drivers/of/base.c b/drivers/of/base.c
 index 8ecca7a..ba3cfca 100644
 --- a/drivers/of/base.c
 +++ b/drivers/of/base.c
 @@ -1985,6 +1985,37 @@ struct device_node *of_find_next_cache_node(const 
 struct device_node *np)
  }
  
  /**
 + * of_graph_parse_endpoint() - parse common endpoint node properties
 + * @node: pointer to endpoint device_node
 + * @endpoint: pointer to the OF endpoint data structure
 + *
 + * All properties are optional. If none are found, we don't set any flags.
 + * This means the port has a static configuration and no properties have
 + * to be specified explicitly.
 + * The caller should hold a reference to @node.
 + */
 +int of_graph_parse_endpoint(const struct device_node *node,
 + struct of_endpoint *endpoint)
 +{
 + struct device_node *port_node = of_get_parent(node);

Can port_node be NULL? Probably only if something is quite wrong, but
maybe it's safer to return error in that case.

 + memset(endpoint, 0, sizeof(*endpoint));
 +
 + endpoint->local_node = node;
 + /*
 +  * It doesn't matter whether the two calls below succeed.
 +  * If they don't then the default value 0 is used.
 +  */
 + of_property_read_u32(port_node, "reg", &endpoint->port);
 + of_property_read_u32(node, "reg", &endpoint->id);

If the endpoint does not have 'port' as parent (i.e. the shortened
format), the above will return the 'reg' of the device node (with
'device node' I mean the main node, with 'compatible' property).

And generally speaking, if struct of_endpoint is needed, maybe it would
be better to return the struct of_endpoint when iterating the ports and
endpoints. That way there's no need to do parsing afterwards, trying
to figure out if there's a parent port node, but the information is
already at hand.

 +
 + of_node_put(port_node);
 +
 + return 0;
 +}
 +EXPORT_SYMBOL(of_graph_parse_endpoint);
 +
 +/**
   * of_graph_get_next_endpoint() - get next endpoint node
   * @parent: pointer to the parent device node
   * @prev: previous endpoint node, or NULL to get first
 diff --git a/include/linux/of_graph.h b/include/linux/of_graph.h
 index 3bbeb60..2b233db 100644
 --- a/include/linux/of_graph.h
 +++ b/include/linux/of_graph.h
 @@ -14,7 +14,21 @@
  #ifndef __LINUX_OF_GRAPH_H
  #define __LINUX_OF_GRAPH_H
  
 +/**
 + * struct of_endpoint - the OF graph endpoint data structure
 + * @port: identifier (value of reg property) of a port this endpoint belongs 
 to
 + * @id: identifier (value of reg property) of this endpoint
 + * @local_node: pointer to device_node of this endpoint
 + */
 +struct of_endpoint {
 + unsigned int port;
 + unsigned int id;
 + const struct device_node *local_node;
 +};

A few thoughts about the iteration, and the API in general.

In the omapdss version I separated iterating ports and endpoints, for
the two reasons:

1) I think there are cases where you may want to have properties in the
port node, for things that are common for all the port's endpoints.

2) if there are multiple ports, I think the driver code is cleaner if
you first take the port, decide what port that is and maybe call a
sub-function, and then iterate the endpoints for that port only.

Both of those are possible with the API in the series, but not very cleanly.

Also, if you just want to iterate the endpoints, it's easy to implement
a helper using the separate port and endpoint iterations.
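
In other words, usage along these lines, where of_graph_get_next_port()
is the extra helper I'm after and the rest exists already:

struct device_node *port = NULL, *ep;
u32 reg;

while ((port = of_graph_get_next_port(dev_node, port))) {
	reg = 0;
	of_property_read_u32(port, "reg", &reg);

	for_each_child_of_node(port, ep) {
		if (of_node_cmp(ep->name, "endpoint"))
			continue;
		/* handle endpoint 'ep' of port 'reg' here */
	}
}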


Then, about the get_remote functions: I think there should be only one
function for that purpose, one that returns the device node that
contains the remote endpoint.

My reasoning is that the ports and endpoints, and their contents, should
be private to the device. So the only use to get the remote is to get
the actual device, to see if it's been probed, or maybe get some video
API for that device.

If the driver model used has some kind of master-driver, which goes
through all the display entities, I think the above is still valid. When
the master-driver follows the remote-link, it still needs to first get
the main device node, as the ports and endpoints make no sense without
the context of the main device node.
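
So in practice the only graph lookup a driver (or the master driver)
would need is something like this, using the
of_graph_get_remote_port_parent() from this series:

struct device_node *remote_dev;

remote_dev = of_graph_get_remote_port_parent(local_endpoint);
if (remote_dev) {
	/* check whether remote_dev has been probed, or get its video API */
	of_node_put(remote_dev);
}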

 Tomi




signature.asc
Description: OpenPGP digital signature


Re: [PATCH v5 6/7] of: Implement simplified graph binding for single port devices

2014-03-04 Thread Tomi Valkeinen
On 27/02/14 19:35, Philipp Zabel wrote:
 For simple devices with only one port, it can be made implicit.
 The endpoint node can be a direct child of the device node.

snip

 @@ -2105,9 +2112,11 @@ struct device_node *of_graph_get_remote_port_parent(
   /* Get remote endpoint node. */
   np = of_parse_phandle(node, "remote-endpoint", 0);
  
 - /* Walk 3 levels up only if there is 'ports' node. */
 + /* Walk 3 levels up only if there is 'ports' node */
   for (depth = 3; depth && np; depth--) {
   np = of_get_next_parent(np);
 + if (depth == 3 && of_node_cmp(np->name, "port"))
 + break;
   if (depth == 2 && of_node_cmp(np->name, "ports"))
   break;
   }

This function becomes a bit funny. Would it be clearer just to do
something like:

np = of_parse_phandle(node, "remote-endpoint", 0);

np = of_get_next_parent(np);
if (of_node_cmp(np->name, "port") != 0)
	return np;

np = of_get_next_parent(np);
if (of_node_cmp(np->name, "ports") != 0)
	return np;

np = of_get_next_parent(np);
return np;

 Tomi




signature.asc
Description: OpenPGP digital signature


Re: [PATCH v5 5/7] [media] of: move common endpoint parsing to drivers/of

2014-03-04 Thread Tomi Valkeinen
On 04/03/14 13:36, Philipp Zabel wrote:
 Hi Tomi,
 
 Am Dienstag, den 04.03.2014, 10:58 +0200 schrieb Tomi Valkeinen:
 [...]
 +int of_graph_parse_endpoint(const struct device_node *node,
 +   struct of_endpoint *endpoint)
 +{
 +   struct device_node *port_node = of_get_parent(node);

 Can port_node be NULL? Probably only if something is quite wrong, but
 maybe it's safer to return error in that case.
 
 both of_property_read_u32 and of_node_put can handle port_node == NULL.
 I'll add a WARN_ONCE here as for of_graph_get_next_endpoint and continue
 on.

Isn't it better to return an error?

 And generally speaking, if struct of_endpoint is needed, maybe it would
 be better to return the struct of_endpoint when iterating the ports and
 endpoints. That way there's no need to do parsing afterwards, trying
 to figure out if there's a parent port node, but the information is
 already at hand.
 
 I'd like to keep the iteration separate from parsing so we can
 eventually introduce a for_each_endpoint_of_node helper macro around
 of_graph_get_next_endpoint.
 
 [...]
 A few thoughts about the iteration, and the API in general.

 In the omapdss version I separated iterating ports and endpoints, for
 the two reasons:

 1) I think there are cases where you may want to have properties in the
 port node, for things that are common for all the port's endpoints.

 2) if there are multiple ports, I think the driver code is cleaner if
 you first take the port, decide what port that is and maybe call a
 sub-function, and then iterate the endpoints for that port only.
 
 It depends a bit on whether you are actually iterating over individual
 ports, or if you are just walking the whole endpoint graph to find
 remote devices that have to be added to the component master's waiting
 list, for example.

True, but the latter is easily implemented using the separate
port/endpoint iteration. So I see it as a more powerful API.

 Both of those are possible with the API in the series, but not very cleanly.

 Also, if you just want to iterate the endpoints, it's easy to implement
 a helper using the separate port and endpoint iterations.
 
 I started out to move an existing (albeit lightly used) API to a common
 place so others can use it and improve upon it, too. I'm happy to pile
 on fixes directly in this series, but could we separate the improvement
 step from the move, for the bigger modifications?

Yes, I understand that. What I wonder is which is easier: making it a
public API now, more or less as it was in v4l2, or making it a public API
only when all the improvements we can think of have been made.

So my fear is that the API is now made public, and you and others start
to use it. But I can't use it, as I need things like separate
port/endpoint iteration. I need to add those, which also means that I
need to change all the users of the API, making the task more difficult
than I'd like.

However, this is more thinking out loud than saying I don't like the
series. It's a good series =).

 I had no immediate use for the port iteration, so I have taken no steps
 to add a function for this. I see no problem to add this later when
 somebody needs it, or even rewrite of_graph_get_next_endpoint to use it
 if it is feasible. Iterating over endpoints on a given port needs no
 helper, as you can just use for_each_child_of_node.

I would have a helper, which should do some sanity checks, like checking
that the node names are "endpoint".
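
Something along these lines is what I mean (the helper name is made up,
it's not in the series):

#include <linux/of.h>

/* iterate the children of a port node, accepting only "endpoint" nodes */
static struct device_node *
of_graph_get_port_endpoint(const struct device_node *port,
			   struct device_node *prev)
{
	struct device_node *ep = prev;

	do {
		ep = of_get_next_child(port, ep);
		if (!ep)
			return NULL;
	} while (of_node_cmp(ep->name, "endpoint") != 0);

	return ep;
}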

 Then, about the get_remote functions: I think there should be only one
 function for that purpose, one that returns the device node that
 contains the remote endpoint.

 My reasoning is that the ports and endpoints, and their contents, should
 be private to the device. So the only use to get the remote is to get
 the actual device, to see if it's been probed, or maybe get some video
 API for that device.
 
 of_graph_get_remote_port currently is used in the exynos4-is/fimc-is.c
 v4l2 driver to get the mipi-csi channel id from the remote port, and
 I've started using it in imx-drm-core.c for two cases:
 - given an endpoint on the encoder, find the remote port connected to
   it, get the associated drm_crtc, to obtain its drm_crtc_mask
   for encoder-possible_crtcs.
 - given an encoder and a connected drm_crtc, walk all endpoints to find
   the remote port associated with the drm_crtc, and then use the local
   endpoint parent port to determine multiplexer settings.

Ok.

In omapdss each driver handles only the ports and endpoints defined for
its device, and they can be considered private to that device. The only
reason to look for the remote endpoint is to find the remote device. To
me the omapdss model makes sense, and feels logical and sane =). So I
have to say I'm not really familiar with the model you're using.

Of course, the different get_remote_* funcs do no harm, even if we have
a bunch of them. My point was only about enforcing the correct use of
the model, where correct is of course

Re: [PATCH v4 3/3] Documentation: of: Document graph bindings

2014-02-27 Thread Tomi Valkeinen
On 26/02/14 17:47, Philipp Zabel wrote:

 Ok, that looks compact enough. I still don't see the need to make the
 remote-endpoint property required to achieve this, though. On the
 other hand, I wouldn't object to making it mandatory either.

Sure, having remote-endpoint as required doesn't achieve anything
particular as such. I just feel it's cleaner. If you have an endpoint,
it must point to somewhere. Maybe it makes the code a tiny bit simpler.

If we do already have users for this that do not have the
remote-endpoint, then we're stuck with having it as optional. If we
don't, I'd rather have it as mandatory.

In any case, it's not a very important thing either way.

 Of course, it's up to the developer how his dts looks like. But to me it
 makes sense to require the remote-endpoint property, as the endpoint, or
 even the port, doesn't make much sense if there's nothing to connect to.
 
 Please let's not make it mandatory for a port node to contain an
 endpoint. For any device with multiple ports we can't use the simplified
 form above, and only adding the (correctly numbered) port in all the
 board device trees would be a pain.

That's true. I went with having the ports in the board file, for example
on omap3 the dss has two ports, and N900 board uses the second one:

dss {
	status = "ok";

	pinctrl-names = "default";
	pinctrl-0 = <&dss_sdi_pins>;

	vdds_sdi-supply = <&vaux1>;

	ports {
		#address-cells = <1>;
		#size-cells = <0>;

		port@1 {
			reg = <1>;

			sdi_out: endpoint {
				remote-endpoint = <&lcd_in>;
				datapairs = <2>;
			};
		};
	};
};

Here I guess I could have:

dss {
	status = "ok";

	pinctrl-names = "default";
	pinctrl-0 = <&dss_sdi_pins>;

	vdds_sdi-supply = <&vaux1>;
};

&dss_sdi_port {
	sdi_out: endpoint {
		remote-endpoint = <&lcd_in>;
		datapairs = <2>;
	};
};

But I didn't like that as it splits the pincontrol and regulator supply
from the port/endpoint, which are functionally linked together.

Actually, somewhat aside the subject, I'd like to have the pinctrl and
maybe regulator supply also per endpoint, but I didn't see how that
would be possible with the current framework. If a board would need two
endpoints for the same port, most likely it would also need two different
sets of pinctrls.

 Tomi






Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

2014-02-27 Thread Tomi Valkeinen
On 26/02/14 16:48, Philipp Zabel wrote:

 I would like the document to acknowledge the difference from the
 phandle+args pattern used elsewhere and a description of when it would
 be appropriate to use this instead of a simpler binding.
 
 Alright. The main point of this binding is that the devices may have
 multiple distinct ports that each can be connected to other devices.

The other main point with this binding is multiple endpoints per port.
So you can have, say, a display controller with a single port, which has
two endpoints going to two separate LCD panels.

At the physical level that would usually mean that the same pins from the
display controller are connected to two panels. Most likely this would
mean that only one panel can be used at a time, possibly with different
settings (say, 16 RGB pins for one panel, 24 RGB pins for the other).
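
As DT data that could look roughly like this (the labels and the
data-lines values are just made up for the example):

display-controller {
	port {
		#address-cells = <1>;
		#size-cells = <0>;

		endpoint@0 {
			reg = <0>;
			remote-endpoint = <&panel1_in>;
			data-lines = <16>;
		};

		endpoint@1 {
			reg = <1>;
			remote-endpoint = <&panel2_in>;
			data-lines = <24>;
		};
	};
};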

 Tomi






Re: [PATCH v4 3/3] Documentation: of: Document graph bindings

2014-02-27 Thread Tomi Valkeinen
On 27/02/14 12:52, Philipp Zabel wrote:

 This is a bit verbose, and if your output port is on an encoder device
 with multiple inputs, the correct port number would become a bit
 unintuitive. For example, we'd have to use port@4 as the output on encoder
 units that have a 4-port input multiplexer and port@1 for those that
 don't.

Hmm, sorry, I don't follow...

The port numbers should be fixed for a particular device. If the device
has 4 input ports, the output port would always be port@4, no matter how
many of the input ports are actually used.

I don't have anything against having the ports described in the SoC
dtsi. But I do think it may make it a bit unclear that the ports are
from the same device, and share things like pinmuxing. Both approaches
should work fine, afaics.

However, if, instead, we could have the pinmuxing and other relevant
information in the port or endpoint nodes, making the ports independent
of each other and of the device behind them, my argument above would
disappear.

Also, while I'm all for making the dts files clear, I do think the one
writing the dts still needs to go carefully through the binding docs.
Say, a device may need a gpio list with a bunch of gpios. The developer
just needs to read the docs and know that gpio #3 on the list is GPIO_XYZ.

So I don't see it as a major problem that the board developer needs to
know that port@1 on OMAP3's DSS is SDI output.

 Here I guess I could have:

 dss {
 	status = "ok";

 	pinctrl-names = "default";
 	pinctrl-0 = <&dss_sdi_pins>;

 	vdds_sdi-supply = <&vaux1>;
 };
 
 What is supplied by this regulator? Is it the PHY?

Yes.

 Actually, somewhat aside the subject, I'd like to have the pinctrl and
 maybe regulator supply also per endpoint, but I didn't see how that
 would be possible with the current framework. If a board would need two
 endpoints for the same port, most likely it would also need two different
 sets of pinctrls.
 
 I have a usecase for this the other way around. The i.MX6 DISP0 parallel
 display pads can be connected to two different display controllers via
 multiplexers in the pin control block.
 
 parallel-display {
 	compatible = "fsl,imx-parallel-display";
 	#address-cells = <1>;
 	#size-cells = <0>;

 	port@0 {
 		endpoint {
 			remote-endpoint = <&ipu1_di0>;
 		};
 	};

 	port@1 {
 		endpoint {
 			remote-endpoint = <&ipu2_di0>;
 		};
 	};

 	disp0: port@2 {
 		endpoint {
 			pinctrl-names = "0", "1";
 			pinctrl-0 = <&pinctrl_disp0_ipu1>;
 			pinctrl-1 = <&pinctrl_disp0_ipu2>;
 			remote-endpoint = <&lcd_in>;
 		};
 	};
 };
 
 Here, depending on the active input port, the corresponding pin control
 on the output port could be set. This is probably quite driver specific,
 so I don't see yet how the framework should help with this. In any case,
 maybe this is a bit out of scope for the generic graph bindings.

Hmm, why wouldn't you have the pinctrl definitions in the ports 0 and 1,
then, if it's about selecting the active input pins?

I think the pinctrl framework could offer ways to have pinctrl
definitions anywhere in the DT structure. It'd be up to the driver to
point to the pinctrl in the DT, ask the framework to parse them, and
eventually enable/disable the pins.

But yes, it's a bit out of scope =).

 Tomi






Re: [PATCH v4 3/3] Documentation: of: Document graph bindings

2014-02-26 Thread Tomi Valkeinen
On 25/02/14 16:58, Philipp Zabel wrote:

 +Optional endpoint properties
 +
 +
 +- remote-endpoint: phandle to an 'endpoint' subnode of a remote device node.

Why is that optional? What use is an endpoint, if it's not connected to
something?

Also, if this is being worked on, I'd like to propose the addition of
simpler single-endpoint cases which I've been using with OMAP DSS. So if
there's just a single endpoint for the device, which is very common, you
can have just:

device {
...
endpoint { ... };
};

However, I guess that the patch just keeps growing and growing, so maybe
it's better to add such things later =).

 Tomi






Re: [PATCH v4 3/3] Documentation: of: Document graph bindings

2014-02-26 Thread Tomi Valkeinen
On 26/02/14 16:57, Philipp Zabel wrote:
 Hi Tomi,
 
 On Wednesday, 26.02.2014, 15:14 +0200, Tomi Valkeinen wrote:
 On 25/02/14 16:58, Philipp Zabel wrote:

 +Optional endpoint properties
 +
 +
 +- remote-endpoint: phandle to an 'endpoint' subnode of a remote device 
 node.

 Why is that optional? What use is an endpoint, if it's not connected to
 something?
 
 This allows including an empty endpoint template in a SoC dtsi for
 the convenience of board dts writers. Also, the same property is
 currently listed as optional in video-interfaces.txt.
 
   soc.dtsi:
 	display-controller {
 		port {
 			disp0: endpoint { };
 		};
 	};

   board.dts:
 	#include "soc.dtsi"
 	&disp0 {
 		remote-endpoint = <&panel_input>;
 	};
 	panel {
 		port {
 			panel_in: endpoint {
 				remote-endpoint = <&disp0>;
 			};
 		};
 	};
 
 Any board not using that port can just leave the endpoint disconnected.

Hmm I see. I'm against that.

I think the SoC dtsi should not contain an endpoint node, or even a port node
(at least usually). It doesn't know how many endpoints, if any, a
particular board has. That part should be up to the board dts.

I've done this with OMAP as (much simplified):

SoC.dtsi:

dss: dss@5800 {
	status = "disabled";
};

Nothing else (relevant here). The binding documentation states that dss
has one port, and describes what data is needed for the port and endpoint.

board.dts:

dss {
	status = "ok";

	pinctrl-names = "default";
	pinctrl-0 = <&dss_dpi_pins>;

	dpi_out: endpoint {
		remote-endpoint = <&tfp410_in>;
		data-lines = <24>;
	};
};

That's using the shortened version without port node.

Of course, it's up to the developer how his dts looks like. But to me it
makes sense to require the remote-endpoint property, as the endpoint, or
even the port, doesn't make much sense if there's nothing to connect to.

 Tomi






Re: [PATCH v2] [media] of: move graph helpers from drivers/media/v4l2-core to drivers/media

2014-02-11 Thread Tomi Valkeinen
Hi,

On 11/02/14 23:41, Philipp Zabel wrote:
 From: Philipp Zabel philipp.za...@gmail.com
 
 This patch moves the parsing helpers used to parse connected graphs
 in the device tree, like the video interface bindings documented in
 Documentation/devicetree/bindings/media/video-interfaces.txt, from
 drivers/media/v4l2-core to drivers/media.
 
 This allows to reuse the same parser code from outside the V4L2
 framework, most importantly from display drivers.
 The functions v4l2_of_get_next_endpoint, v4l2_of_get_remote_port,
 and v4l2_of_get_remote_port_parent are moved. They are renamed to
 of_graph_get_next_endpoint, of_graph_get_remote_port, and
 of_graph_get_remote_port_parent, respectively.
 Since there are not that many current users, switch all of them
 to the new functions right away.
 
 Signed-off-by: Philipp Zabel p.za...@pengutronix.de
 Acked-by: Mauro Carvalho Chehab m.che...@samsung.com
 Acked-by: Guennadi Liakhovetski g.liakhovet...@gmx.de

I don't think the graphs or the parsing code has anything video
specific. It could well be used for anything, whenever there's need to
describe connections between devices which are not handled by the normal
child-parent relationships. So the code could well reside in some
generic place, in my opinion.

Also, I have no problem with having it in drivers/media, but
drivers/video should work also. We already have other, generic, video
related things there like hdmi infoframes and display timings.

And last, it's fine to move the funcs as-is in the first place, but I
think they should be improved a bit before non-v4l2 users use them.
There are a couple of things I tried to accomplish with the omapdss
specific versions in
https://www.mail-archive.com/linux-omap@vger.kernel.org/msg100761.html:

- Iterating ports and endpoints separately. If a node has multiple
ports, I would think that the driver needs to know which port and
endpoint combination is the current one during iteration. It's not
enough to just get the endpoint.

- Simplify cases when there's just one port and one endpoint, in which
case the port node can be omitted from the DT data.
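
For the latter, roughly this kind of fallback is what I mean (the helper
name is made up):

#include <linux/of.h>

/*
 * If the device node has no "port" child, treat the device node itself as
 * the (only) port, so the port node can be left out in the simple case.
 */
static struct device_node *of_graph_get_port(struct device_node *parent)
{
	struct device_node *port;

	port = of_get_child_by_name(parent, "port");
	if (port)
		return port;

	return of_node_get(parent);
}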

 Tomi






Re: [PATCH] omap_vout: Add DVI display type support

2014-02-10 Thread Tomi Valkeinen
On 08/02/14 16:32, Laurent Pinchart wrote:
 Since the introduction of the new OMAP DSS DVI connector driver in
 commit 348077b154357eec595068a3336ef6beb870e6f3 (OMAPDSS: Add new DVI
 Connector driver), DVI outputs report a new display type of
 OMAP_DISPLAY_TYPE_DVI instead of OMAP_DISPLAY_TYPE_DPI. Handle the new
 type in the IRQ handler.
 
 Signed-off-by: Laurent Pinchart laurent.pinch...@ideasonboard.com
 ---
  drivers/media/platform/omap/omap_vout.c | 1 +
  1 file changed, 1 insertion(+)
 
 diff --git a/drivers/media/platform/omap/omap_vout.c 
 b/drivers/media/platform/omap/omap_vout.c
 index dfd0a21..9a726ea 100644
 --- a/drivers/media/platform/omap/omap_vout.c
 +++ b/drivers/media/platform/omap/omap_vout.c
 @@ -601,6 +601,7 @@ static void omap_vout_isr(void *arg, unsigned int 
 irqstatus)
   switch (cur_display->type) {
   case OMAP_DISPLAY_TYPE_DSI:
   case OMAP_DISPLAY_TYPE_DPI:
 + case OMAP_DISPLAY_TYPE_DVI:
   if (mgr_id == OMAP_DSS_CHANNEL_LCD)
   irq = DISPC_IRQ_VSYNC;
   else if (mgr_id == OMAP_DSS_CHANNEL_LCD2)
 

Acked-by: Tomi Valkeinen tomi.valkei...@ti.com

 Tomi






Re: [RFR 2/2] drm/panel: Add simple panel support

2013-10-24 Thread Tomi Valkeinen
On 24/10/13 13:40, Laurent Pinchart wrote:

  panel {
  	remote = <&remote-endpoint>;
  	common-video-property = <asd>;
  };

  panel {
  	port {
  		endpoint {
  			remote = <&remote-endpoint>;
  			common-video-property = <asd>;
  		};
  	};
  };
 
 Please note that the common video properties would be in the panel node, not 
 in the endpoint node (unless you have specific requirements to do so, which 
 isn't the common case).

Hmm, well, the panel driver must look for its properties either in the
panel node, or in the endpoint node (I guess it could look in
both, but that doesn't sound good).

If you write the panel driver, and in all your cases the properties work
fine in the panel node, does that mean they'll work fine with everybody?

I guess there are different kinds of properties. Something like a
regulator is obviously property of the panel. But anything related to
the video itself, like DPI's bus width, or perhaps even something like
orientation if the panel supports such, could need to be in the
endpoint node.

But yes, I understand what you mean. With common-video-property I
meant common properties like DPI bus width.

 If that can be supported in the SW by adding complexity to a few functions,
 and it covers practically all the panels, isn't it worth it?

 Note that I have not tried this, so I don't know if there are issues.
 It's just a thought. Even if there's need for a endpoint node, perhaps
 the port node can be made optional.
 
 It can be worth it, as long as we make sure that simplified bindings cover 
 the 
 needs of the generic code.
 
 We could assume that, if the port subnode isn't present, the device will have 
 a single port, with a single endpoint. However, isn't the number of endpoints 

Right.

 a system property rather than a device property ? If a port of a device is 

Yes.

 connected to two remote ports it will require two endpoints. We could select 
 the simplified or full bindings based on the system topology though.

The drivers should not know about simplified/normal bindings. They
should use CDF DT helper functions to get the port and endpoint
information. The helper functions would do the assuming.

 Tomi






Re: [PATCH/RFC v3 00/19] Common Display Framework

2013-10-17 Thread Tomi Valkeinen
On 17/10/13 10:48, Andrzej Hajda wrote:

 The main function of DSI is to transport pixels from one IP to another IP
 and this function IMO should not be modeled by display entity.
 Power, clocks, etc will be performed via control bus according to
 panel demands.
 If 'DSI chip' has additional functions for video processing they can
 be modeled by CDF entity if it makes sense.

Now I don't follow. What do you mean with display entity and with CDF
entity? Are they the same?

Let me try to clarify my point:

On OMAP SoC we have a DSI encoder, which takes input from the display
controller in parallel RGB format, and outputs DSI.

Then there are external encoders that take MIPI DPI as input, and output
DSI.

The only difference with the above two components is that the first one
is embedded into the SoC. I see no reason to represent them in different
ways (i.e. as you suggested, not representing the SoC's DSI at all).

Also, if you use DSI burst mode, you will have to have different video
timings in the DSI encoder's input and output. And depending on the
buffering of the DSI encoder, you could have different timings in any case.

Furthermore, both components could have extra processing. I know the
external encoders sometimes do have features like scaling.

 We still have two different endpoint configurations for the same
 DSI-master port. If that configuration is in the DSI-master's port node,
 not inside an endpoint data, then that can't be supported.
 I am not sure if I understand it correctly. But it seems quite simple:
 when panel starts/resumes it request DSI (via control bus) to fulfill
 its configuration settings.
 Of course there are some settings which are not panel dependent and those
 should reside in DSI node.

Exactly. And when the two panels require different non-panel-dependent
settings, how do you represent them in the DT data?

 We say then: callee handles locking :)
 Sure, but my point was that the caller handling the locking is much
 simpler than the callee handling locking. And the latter causes
 atomicity issues, as the other API could be invoked in between two calls
 for the first API.

 
 Could you describe such scenario?

If we have two independent APIs, ctrl and video, that affect the same
underlying hardware, the DSI bus, we could have a scenario like this:

thread 1:

ctrl->op_foo();
ctrl->op_bar();

thread 2:

video->op_baz();

Even if all those ops do locking properly internally, the fact that
op_baz() can be called in between op_foo() and op_bar() may cause problems.

To avoid that issue with two APIs we'd need something like:

thread 1:

ctrl->lock();
ctrl->op_foo();
ctrl->op_bar();
ctrl->unlock();

thread 2:

video->lock();
video->op_baz();
video->unlock();

 Platform devices
 
 Platform devices are devices that typically appear as autonomous
 entities in the system. This includes legacy port-based devices and
 host bridges to peripheral buses, and most controllers integrated
 into system-on-chip platforms.  What they usually have in common
 is direct addressing from a CPU bus.  Rarely, a platform_device will
 be connected through a segment of some other kind of bus; but its
 registers will still be directly addressable.
 Yep, typically and rarely =). I agree, it's not clear. I think there
 are things with DBI/DSI that clearly point to a platform device, but
 also the other way.
 Just to be sure, we are talking here about DSI-slaves, ie. for example
 about panels,
 where direct accessing from CPU bus usually is not possible.

Yes. My point is that with DBI/DSI there's not much bus there (if a
normal bus would be PCI/USB/i2c etc), it's just a point to point link
without probing or a clearly specified setup sequence.

If DSI/DBI was used only for control, a linux bus would probably make
sense. But DSI/DBI is mainly a video transport channel, with the
control-part being secondary.

And when considering that the video and control data are sent over the
same channel (i.e. there's no separate, independent ctrl channel), and
the strict timing restrictions with video, my gut feeling is just that
all the extra complexity brought with separating the control to a
separate bus is not worth it.

 Tomi






Re: [PATCH/RFC v3 00/19] Common Display Framework

2013-10-17 Thread Tomi Valkeinen
On 17/10/13 15:26, Andrzej Hajda wrote:

 I am not sure what exactly the encoder performs, if this is only image
 transport from dispc to panel CDF pipeline in both cases should look like:
 dispc  panel
 The only difference is that panels will be connected via different Linux bus
 adapters, but it will be irrelevant to CDF itself. In this case I would say
 this is DSI-master rather than encoder, or at least that the only
 function of the
 encoder is DSI.

Yes, as I said, it's up to the driver writer how he wants to use CDF. If
he doesn't see the point of representing the SoC's DSI encoder as a
separate CDF entity, nobody forces him to do that.

On OMAP, we have single DISPC with multiple parallel outputs, and a
bunch of encoder IPs (MIPI DPI, DSI, DBI, etc). Each encoder IP can be
connected to some of the DISPC's output. In this case, even if the DSI
encoder does nothing special, I see it much better to represent the DSI
encoder as a CDF entity so that the links between DISPC, DSI, and the
DSI peripherals are all there.

 If display_timings on input and output differs, I suppose it should be
 modeled
 as display_entity, as this is an additional functionality(not covered by
 DSI standard AFAIK).

Well, DSI standard is about the DSI output. Not about the encoder's
input, or the internal operation of the encoder.

 Of course there are some settings which are not panel dependent and those
 should reside in DSI node.
 Exactly. And when the two panels require different non-panel-dependent
 settings, how do you represent them in the DT data?
 
 non-panel-dependent setting cannot depend on panel, by definition :)

With non-panel-dependent setting I meant something that is a property
of the DSI master device, but still needs to be configured differently
for each panel.

Say, pin configuration. When using panel A, the first pin of the DSI
block could be clock+. With panel B, the first pin could be clock-. This
configuration is about DSI master, but it is different for each panel.

If we have separate endpoint in the DSI master for each panel, this data
can be there. If we don't have the endpoint, as is the case with
separate control bus, where is that data?

 Could you describe such scenario?
 If we have two independent APIs, ctrl and video, that affect the same
 underlying hardware, the DSI bus, we could have a scenario like this:

 thread 1:

 ctrl->op_foo();
 ctrl->op_bar();

 thread 2:

 video->op_baz();

 Even if all those ops do locking properly internally, the fact that
 op_baz() can be called in between op_foo() and op_bar() may cause problems.

 To avoid that issue with two APIs we'd need something like:

 thread 1:

 ctrl->lock();
 ctrl->op_foo();
 ctrl->op_bar();
 ctrl->unlock();

 thread 2:

 video->lock();
 video->op_baz();
 video->unlock();
 I should mention I was asking for real hw/drivers configuration.
 I do not know what do you mean with video-op_baz() ?
 DSI-master is not modeled in CDF, and only CDF provides video
 operations.

It was just an example of the additional complexity regarding
locking when using two APIs.

The point is that if the panel driver has two pointers (i.e. API), one
for the control bus, one for the video bus, and ops on both buses affect
the same hardware, the locking is not easy.

If, on the other hand, the panel driver only has one API to use, it's
simple to require the caller to handle any locking.

 I guess one scenario, when two panels are connected to single DSI-master.
 In such case both can call DSI ops, but I do not know how do you want to
 prevent it in case of your CDF-T implementation.

No, that was not the case I was describing. This was about a single panel.

If we have two independent APIs, we need to define how locking is
managed for those APIs. Even if in practice both APIs are used by the
same driver, and the driver can manage the locking, that's not really a
valid requirement. It'd be almost the same as requiring that gpio API
cannot be called at the same time as i2c API.

 Tomi






Re: [PATCH/RFC v3 00/19] Common Display Framework

2013-10-11 Thread Tomi Valkeinen
On 09/10/13 17:08, Andrzej Hajda wrote:

 As I have adopted existing internal driver for MIPI-DSI bus, I did not
 take too much
 care for DT. You are right, 'bta-timeout' is a configuration parameter
 (however its
 minimal value is determined by characteristic of the DSI-slave). On the
 other
 side currently there is no good place for such configuration parameters
 AFAIK.

The minimum bta-timeout should be deducible from the DSI bus speed,
shouldn't it? Thus there's no need to define it anywhere.

 - enable_hs and enable_te, used to enable/disable HS mode and
 tearing-elimination
 
 It seems there should be a way to synchronize TE signal with panel,
 in case signal is provided only to dsi-master. Some callback I suppose?
 Or transfer synchronization should be done by dsi-master.

Hmm, can you explain a bit what you mean?

Do you mean that the panel driver should get a callback when DSI TE
trigger happens?

On OMAP, when using DSI TE trigger, the dsi-master does it all. So the
panel driver just calls update() on the dsi-master, and then the
dsi-master will wait for TE, and then start the transfer. There's also a
callback to the panel driver when the transfer has completed.

 - set_max_rx_packet_size, used to configure the max rx packet size.
 Similar callbacks should be added to mipi-dsi-bus ops as well, to
 make it complete/generic.

Do you mean the same calls should exist both in the mipi-dbi-bus ops and
in the video ops? If they are called with different values, which one
wins?

 http://article.gmane.org/gmane.comp.video.dri.devel/90651
 http://article.gmane.org/gmane.comp.video.dri.devel/91269
 http://article.gmane.org/gmane.comp.video.dri.devel/91272

 I still think that it's best to consider DSI and DBI as a video bus (not
 as a separate video bus and a control bus), and provide the packet
 transfer methods as part of the video ops.
 I have read all posts regarding this issue and currently I tend
 to solution where CDF is used to model only video streams,
 with control bus implemented in different framework.
 The only concerns I have if we should use Linux bus for that.

Ok. I have many other concerns, as I've expressed in the mails =). I
still don't see how it could work. So I'd very much like to see a more
detailed explanation of how the separate control & video bus approach would
deal with different scenarios.

Let's consider a DSI-to-HDMI encoder chip. Version A of the chip is
controlled via DSI, version B is controlled via i2c. As the output of
the chip goes to HDMI connector, the DSI bus speed needs to be set
according to the resolution of the HDMI monitor.

So, with version A, the encoder driver would have some kind of pointers
to ctrl_ops and video_ops (or, pointers to dsi_bus instance and
video_bus instance), right? The ctrl_ops would need to have ops like
set_bus_speed, enable_hs, etc, to configure the DSI bus.

When the encoder driver is started, it'd probably set some safe bus
speed, configure the encoder a bit, read the EDID, enable HS,
re-configure the bus speed to match the monitor's video mode, configure
the encoder, and at last enable the video stream.

Version B would have i2c_client and video_ops. When the driver starts,
it'd  probably do the same things as above, except the control messages
would go through i2c. That means that setting the bus speed, enabling
HS, etc, would happen through video_ops, as the i2c side has no
knowledge of the DSI side, right? Would there be identical ops on both
DSI ctrl and video ops?

That sounds very bad. What am I missing here? How would it work?

And, if we want to separate the video and control, I see no reason to
explicitly require the video side to be present. I.e. we could as well
have a DSI peripheral that has only the control bus used. How would that
reflect to, say, the DT presentation? Say, if we have a version A of the
encoder, we could have DT data like this (just a rough example):

soc-dsi {
	encoder {
		input: endpoint {
			remote-endpoint = <&soc-dsi-ep>;
			/* configuration for the DSI lanes */
			dsi-lanes = <0 1 2 3 4 5>;
		};
	};
};

So the encoder would be placed inside the SoC's DSI node, similar to how
an i2c device would be placed inside SoC's i2c node. DSI configuration
would be inside the video endpoint data.

Version B would be almost the same:

i2c0 {
	encoder {
		input: endpoint {
			remote-endpoint = <&soc-dsi-ep>;
			/* configuration for the DSI lanes */
			dsi-lanes = <0 1 2 3 4 5>;
		};
	};
};

Now, how would the video-bus-less device be defined? It'd be inside the
soc-dsi node, that's clear. Where would the DSI lane configuration be?
Not inside 'endpoint' node, as that's for video and wouldn't exist in
this case. Would we have the same lane configuration in two places, once
for video and once for control?

I agree that 

Re: [PATCH/RFC v3 00/19] Common Display Framework

2013-10-11 Thread Tomi Valkeinen
On 11/10/13 14:19, Andrzej Hajda wrote:
 On 10/11/2013 08:37 AM, Tomi Valkeinen wrote:

  The minimum bta-timeout should be deducible from the DSI bus speed,
 shouldn't it? Thus there's no need to define it anywhere.
  Hmm, specification says "This specified period shall be longer then
  the maximum possible turnaround delay for the unit to which the
  turnaround request was sent."

Ah, you're right. We can't know how long the peripheral will take
responding. I was thinking of something that only depends on the
bus-speed and the timings for that.

 If I undrestand correctly you think about CDF topology like below:
 
 DispContr(SoC) --- DSI-master(SoC) --- encoder(DSI or I2C)
 
 But I think with mipi-dsi-bus topology could look like:
 
 DispContr(SoC) --- encoder(DSI or I2C)
 
 DSI-master will not have its own entity, in the graph it could be
 represented
 by the link(---), as it really does not process the video, only
 transports it.

At least in OMAP, the SoC's DSI-master receives parallel RGB data from
DISPC, and encodes it to DSI. Isn't that processing? It's basically a
DPI-to-DSI encoder. And it's not a simple pass-through, the DSI video
timings could be considerably different than the DPI timings.

 In case of version A I think everything is clear.
 In case of version B it does not seems so nice at the first sight, but
 still seems quite straightforward to me - special plink in encoder's
 node pointing
 to DSI-master, driver will find the device in runtime and use ops as needed
 (additional ops/helpers required).
 This is also the way to support devices which can be controlled by DSI
 and I2C
 in the same time. Anyway I suspect such scenario will be quite rare.

Okay, so if I gather it right, you say there would be something like
'dsi_adapter' (like i2c_adapter), which represents the dsi-master. And a
driver could get a pointer to this, regardless of whether the linux
device is a DSI device.

At least one issue with this approach is the endpoint problem (see below).

 And, if we want to separate the video and control, I see no reason to
 explicitly require the video side to be present. I.e. we could as well
 have a DSI peripheral that has only the control bus used. How would that
 reflect to, say, the DT presentation? Say, if we have a version A of the
 encoder, we could have DT data like this (just a rough example):

 soc-dsi {
  encoder {
  input: endpoint {
  remote-endpoint = <&soc-dsi-ep>;
 Here I would replace soc-dsi-ep by phandle to display controller/crtc/
 
  /* configuration for the DSI lanes */
  dsi-lanes = <0 1 2 3 4 5>;
 Wow, quite advanced DSI.

Wha? That just means there is one clock lane and two datalanes, nothing
more =). We can select the polarity of a lane, so we describe both the
positive and negative lines there. So it says clk- is connected to pin
0, clk+ connected to pin 1, etc.

  };
  };
 };

 So the encoder would be places inside the SoC's DSI node, similar to how
 an i2c device would be placed inside SoC's i2c node. DSI configuration
 would be inside the video endpoint data.

 Version B would be almost the same:

 i2c0 {
  encoder {
  input: endpoint {
  remote-endpoint = <&soc-dsi-ep>;
 soc-dsi-ep = disp-ctrl-ep
  /* configuration for the DSI lanes */
  dsi-lanes = <0 1 2 3 4 5>;
  };
  };
 };

 Now, how would the video-bus-less device be defined?
 It'd be inside the
 soc-dsi node, that's clear. Where would the DSI lane configuration be?
 Not inside 'endpoint' node, as that's for video and wouldn't exist in
 this case. Would we have the same lane configuration in two places, once
 for video and once for control?
 I think it is control setting, so it should be put outside endpoint node.
 Probably it could be placed in encoder node.

Well, one point of the endpoints is also to allow switching of video
devices.

For example, I could have a board with a SoC's DSI output, connected to
two DSI panels. There would be some kind of mux between, so that I can
select which of the panels is actually connected to the SoC.

Here the first panel could use 2 datalanes, the second one 4. Thus, the
DSI master would have two endpoints, one using 2 and the other
4 datalanes.
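
In DT data that could look something like this (the values are made up,
and the mux itself is left out):

soc-dsi {
	port {
		#address-cells = <1>;
		#size-cells = <0>;

		endpoint@0 {
			reg = <0>;
			remote-endpoint = <&panel1_in>;
			/* clk lane + 2 datalanes */
			dsi-lanes = <0 1 2 3 4 5>;
		};

		endpoint@1 {
			reg = <1>;
			remote-endpoint = <&panel2_in>;
			/* clk lane + 4 datalanes */
			dsi-lanes = <0 1 2 3 4 5 6 7 8 9>;
		};
	};
};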

If we decide that kind of support is not needed, well, is there even
need for the V4L2 endpoints in the DT data at all?

 I agree that having DSI/DBI control and video separated would be
 elegant. But I'd like to hear what is the technical benefit of that? At
 least to me it's clearly more complex to separate them than to keep them
 together (to the extent that I don't yet see how it is even possible),
 so there must be a good reason for the separation. I don't understand
 that reason. What is it?
 Roughly speaking it is a question where is the more convenient place to
 put bunch
 of opses, technically both solutions can be somehow

Re: [PATCH/RFC v3 00/19] Common Display Framework

2013-10-11 Thread Tomi Valkeinen
On 11/10/13 17:16, Andrzej Hajda wrote:

 Picture size, content and format is the same on input and on output of DSI.
 The same bits which enters DSI appears on the output. Internally bits
 order can
 be different but practically you are configuring DSI master and slave
 with the same format.
 
 If you create DSI entity you will have to always set the same format and
 size on DSI input, DSI output and encoder input.
 If you skip creating DSI entity you loose nothing, and you do not need
 to take care of it.

Well, this is really a different question from the bus problem. But
nothing says the DSI master cannot change the format or even size. For
sure it can change the video timings. The DSI master could even take two
parallel inputs, and combine them into one DSI output. You can't know
what all the possible pieces of hardware do =).

If you have a bigger IP block that internally contains the DISPC and the
DSI, then, yes, you can combine them into one display entity. I don't
think that's correct, though. And if the DISPC and DSI are independent
blocks, then especially I think there must be an entity for the DSI
block, which will enable the powers, clocks, etc, when needed.

 Well, one point of the endpoints is also to allow switching of video
 devices.

 For example, I could have a board with a SoC's DSI output, connected to
 two DSI panels. There would be some kind of mux between, so that I can
 select which of the panels is actually connected to the SoC.

 Here the first panel could use 2 datalanes, the second one 4. Thus, the
 DSI master would have two endpoints, the other one using 2 and the other
 4 datalanes.

 If we decide that kind of support is not needed, well, is there even
 need for the V4L2 endpoints in the DT data at all?
 Hmm, both panels connected to one endpoint of dispc ?
 The problem I see is which driver should handle panel switching,
 but this is question about hardware design as well. If this is realized
 by dispc I have told already the solution. If this is realized by other
 device I do not see a problem to create corresponding CDF entity,
 or maybe it can be handled by Pipeline Controller ???

Well the switching could be automatic, when the panel power is enabled,
the DSI mux is switched for that panel. It's not relevant.

We still have two different endpoint configurations for the same
DSI-master port. If that configuration is in the DSI-master's port node,
not inside an endpoint data, then that can't be supported.

 I agree that having DSI/DBI control and video separated would be
 elegant. But I'd like to hear what is the technical benefit of that? At
 least to me it's clearly more complex to separate them than to keep them
 together (to the extent that I don't yet see how it is even possible),
 so there must be a good reason for the separation. I don't understand
 that reason. What is it?
 Roughly speaking it is a question where is the more convenient place to
 put bunch
 of opses, technically both solutions can be somehow implemented.
 Well, it's also about dividing a single physical bus into two separate
 interfaces to it. It sounds to me that it would be much more complex
 with locking. With a single API, we can just say the caller handles
 locking. With two separate interfaces, there must be locking at the
 lower level.
 We say then: callee handles locking :)

Sure, but my point was that the caller handling the locking is much
simpler than the callee handling locking. And the latter causes
atomicity issues, as the other API could be invoked in between two calls
for the first API.

But note that I'm not saying we should not implement bus model just
because it's more complex. We should go for bus model if it's better. I
just want to bring up these complexities, which I feel are quite more
difficult than with the simpler model.

 Pros of mipi bus:
 - no fake entity in CDF, with fake opses, I have to use similar entities
 in MIPI-CSI
 camera pipelines and it complicates life without any benefit(at least
 from user side),
 You mean the DSI-master? I don't see how it's fake, it's a video
 processing unit that has to be configured. Even if we forget the control
 side, and just think about plain video stream with DSI video mode,
 there's are things to configure with it.

 What kind of issues you have in the CSI side, then?
 Not real issues, just needless calls to configure CSI entity pads,
 with the same format and picture sizes as in camera.

Well, the output of a component A is surely the same as the input of
component B, if B receives the data from A. So that does sound useless.
I don't do that kind of calls in my model.

 - CDF models only video buses, control bus is a domain of Linux buses,
 Yes, but in this case the buses are the same. It makes me a bit nervous
 to have two separate ways (video and control) to use the same bus, in a
 case like video where timing is critical.

 So yes, we can consider video and control buses as virtual buses, and
 the actual transport is the 

Re: [PATCH V5 4/5] video: exynos_mipi_dsim: Use the generic PHY driver

2013-10-09 Thread Tomi Valkeinen
On 28/09/13 22:27, Sylwester Nawrocki wrote:
 Use the generic PHY API instead of the platform callback
 for the MIPI DSIM DPHY enable/reset control.
 
 Signed-off-by: Sylwester Nawrocki s.nawro...@samsung.com
 Signed-off-by: Kyungmin Park kyungmin.p...@samsung.com
 Acked-by: Felipe Balbi ba...@ti.com
 Acked-by: Donghwa Lee dh09@samsung.com
 ---
 Changes since v4:
  - PHY label removed from the platform data structure.
 ---
  drivers/video/exynos/Kconfig   |1 +
  drivers/video/exynos/exynos_mipi_dsi.c |   19 ++-
  include/video/exynos_mipi_dsim.h   |5 ++---
  3 files changed, 13 insertions(+), 12 deletions(-)

Acked-by: Tomi Valkeinen tomi.valkei...@ti.com

 Tomi






Re: [PATCH/RFC v3 00/19] Common Display Framework

2013-10-02 Thread Tomi Valkeinen
Hi Andrzej,

On 02/10/13 15:23, Andrzej Hajda wrote:

 Using Linux buses for DBI/DSI
 =

 I still don't see how it would work. I've covered this multiple times in
 previous posts so I'm not going into more details now.

 I implemented DSI (just command mode for now) as a video bus but with bunch 
 of
 extra ops for sending the control messages.
 
 Could you post the list of ops you have to create.

I'd rather not post the ops I have in my prototype, as it's still a
total hack. However, they are very much based on the current OMAP DSS's
ops, so I'll describe them below. I hope I find time to polish my CDF
hacks more, so that I can publish them.

 I have posted some time ago my implementation of DSI bus:
 http://thread.gmane.org/gmane.linux.drivers.video-input-infrastructure/69358/focus=69362

A note about the DT data on your series, as I've been struggling to
figure out the DT data for OMAP: some of the DT properties look like
configuration, not hardware description. For example,
samsung,bta-timeout doesn't describe hardware.

 I needed three quite generic ops to make it working:
 - set_power(on/off),
 - set_stream(on/off),
 - transfer(dsi_transaction_type, tx_buf, tx_len, rx_buf, rx_len)
 I have recently replaced set_power by PM_RUNTIME callbacks,
 but I had to add .initialize ops.

We have a bit more on omap:

http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/include/video/omapdss.h#n648

Some of those should be removed and some should be omap DSI's internal
matters, not part of the API. But it gives an idea of the ops we use.
Shortly about the ops:

- (dis)connect, which might be similar to your initialize. connect is
meant to connect the pipeline, reserving the video ports used, etc.

- enable/disable, enable the DSI bus. If the DSI peripheral requires a
continuous DSI clock, it's also started at this point.

- set_config configures the DSI bus (like, command/video mode, etc.).

- configure_pins can be ignored, I think that function is not needed.

- enable_hs and enable_te, used to enable/disable HS mode and
tearing-elimination

- update, which does a single frame transfer

- bus_lock/unlock can be ignored

- enable_video_output starts the video stream, when using DSI video mode

- the request_vc, set_vc_id, release_vc can be ignored

- Bunch of transfer funcs. Perhaps a single func could be used, as you
do. We have sync write funcs, which do a BTA at the end of the write and
wait for reply, and nosync version, which just pushes the packet to the
TX buffers.

- bta_sync, which sends a BTA and waits for the peripheral to reply

- set_max_rx_packet_size, used to configure the max rx packet size.

 Regarding the discussion how and where to implement control bus I have
 though about different alternatives:
 1. Implement DSI-master as a parent dev which will create DSI-slave
 platform dev in a similar way as for MFD devices (ssbi.c seems to me a
 good example).
 2. Create universal mipi-display-bus which will cover DSI, DBI and
 possibly other buses - they have have few common things - for example
 MIPI-DCS commands.
 
 I am not really convinced to either solution all have some advantages
 and disadvantages.

I think a dedicated DSI bus and your alternatives all have the same
issues with splitting the DSI control into two. I've shared some of my
thoughts here:

http://article.gmane.org/gmane.comp.video.dri.devel/90651
http://article.gmane.org/gmane.comp.video.dri.devel/91269
http://article.gmane.org/gmane.comp.video.dri.devel/91272

I still think that it's best to consider DSI and DBI as a video bus (not
as a separate video bus and a control bus), and provide the packet
transfer methods as part of the video ops.
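
Very roughly, I'm thinking of a single set of ops along these lines (the
names are simplified from the omapdss ops above, and struct display_entity
/ struct dsi_config are just placeholders, nothing final):

#include <linux/types.h>

struct display_entity;
struct dsi_config;

struct dsi_ops {
	int	(*connect)(struct display_entity *e);
	void	(*disconnect)(struct display_entity *e);

	int	(*enable)(struct display_entity *e);
	void	(*disable)(struct display_entity *e);

	int	(*set_config)(struct display_entity *e,
			      const struct dsi_config *cfg);
	void	(*enable_hs)(struct display_entity *e, bool enable);

	/* video */
	int	(*enable_video_output)(struct display_entity *e);
	int	(*update)(struct display_entity *e);

	/* control messages, on the same ops instead of a separate bus */
	ssize_t	(*transfer)(struct display_entity *e, int type,
			    const void *tx, size_t tx_len,
			    void *rx, size_t rx_len);
};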

 Tomi






Re: [PATCH 1/6] v4l: ti-vpe: Create a vpdma helper library

2013-08-05 Thread Tomi Valkeinen
Hi,

On 02/08/13 17:03, Archit Taneja wrote:

 +struct vpdma_data_format vpdma_yuv_fmts[] = {
 + [VPDMA_DATA_FMT_Y444] = {
 + .data_type  = DATA_TYPE_Y444,
 + .depth  = 8,
 + },

This, and all the other tables, should probably be consts?

 +static void insert_field(u32 *valp, u32 field, u32 mask, int shift)
 +{
 +	u32 val = *valp;
 +
 +	val &= ~(mask << shift);
 +	val |= (field & mask) << shift;
 +	*valp = val;
 +}

I think insert normally means, well, inserting a thing in between
something. What you do here is overwriting.

Why not just call it write_field?
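
I.e. the same logic, just with a name that says what it does (and it could
be shortened a bit too):

#include <linux/types.h>

static void write_field(u32 *valp, u32 field, u32 mask, int shift)
{
	*valp = (*valp & ~(mask << shift)) | ((field & mask) << shift);
}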

 + * Allocate a DMA buffer
 + */
 +int vpdma_buf_alloc(struct vpdma_buf *buf, size_t size)
 +{
 +	buf->size = size;
 +	buf->mapped = 0;

Maybe true/false is clearer here than 0/1.

 +/*
 + * submit a list of DMA descriptors to the VPE VPDMA, do not wait for 
 completion
 + */
 +int vpdma_submit_descs(struct vpdma_data *vpdma, struct vpdma_desc_list 
 *list)
 +{
 + /* we always use the first list */
 + int list_num = 0;
 + int list_size;
 +
 + if (vpdma_list_busy(vpdma, list_num))
 + return -EBUSY;
 +
 + /* 16-byte granularity */
 +	list_size = (list->next - list->buf.addr) >> 4;
 +
 +	write_reg(vpdma, VPDMA_LIST_ADDR, (u32) list->buf.dma_addr);
 + wmb();

What is the wmb() for?

 + write_reg(vpdma, VPDMA_LIST_ATTR,
 +		(list_num << VPDMA_LIST_NUM_SHFT) |
 +		(list->type << VPDMA_LIST_TYPE_SHFT) |
 + list_size);
 +
 + return 0;
 +}

 +static void vpdma_firmware_cb(const struct firmware *f, void *context)
 +{
 +	struct vpdma_data *vpdma = context;
 +	struct vpdma_buf fw_dma_buf;
 +	int i, r;
 +
 +	dev_dbg(&vpdma->pdev->dev, "firmware callback\n");
 +
 +	if (!f || !f->data) {
 +		dev_err(&vpdma->pdev->dev, "couldn't get firmware\n");
 +		return;
 +	}
 +
 +	/* already initialized */
 +	if (get_field_reg(vpdma, VPDMA_LIST_ATTR, VPDMA_LIST_RDY_MASK,
 +			VPDMA_LIST_RDY_SHFT)) {
 +		vpdma->ready = true;
 +		return;
 +	}
 +
 +	r = vpdma_buf_alloc(&fw_dma_buf, f->size);
 +	if (r) {
 +		dev_err(&vpdma->pdev->dev,
 +			"failed to allocate dma buffer for firmware\n");
 +		goto rel_fw;
 +	}
 +
 +	memcpy(fw_dma_buf.addr, f->data, f->size);
 +
 +	vpdma_buf_map(vpdma, &fw_dma_buf);
 +
 +	write_reg(vpdma, VPDMA_LIST_ADDR, (u32) fw_dma_buf.dma_addr);
 +
 +	for (i = 0; i < 100; i++) {		/* max 1 second */
 +		msleep_interruptible(10);

You call the interruptible version here, but you don't handle the
interrupted case. I believe the loop will just continue looping, even if
the user interrupted.

 +		if (get_field_reg(vpdma, VPDMA_LIST_ATTR, VPDMA_LIST_RDY_MASK,
 +				VPDMA_LIST_RDY_SHFT))
 +			break;
 +	}
 +
 +	if (i == 100) {
 +		dev_err(&vpdma->pdev->dev, "firmware upload failed\n");
 +		goto free_buf;
 +	}
 +
 +	vpdma->ready = true;
 +
 +free_buf:
 +	vpdma_buf_unmap(vpdma, &fw_dma_buf);
 +
 +	vpdma_buf_free(&fw_dma_buf);
 +rel_fw:
 +	release_firmware(f);
 +}

 Tomi






Re: [PATCH 2/6] v4l: ti-vpe: Add helpers for creating VPDMA descriptors

2013-08-05 Thread Tomi Valkeinen
On 02/08/13 17:03, Archit Taneja wrote:
 Create functions which the VPE driver can use to create a VPDMA descriptor and
 add it to a VPDMA descriptor list. These functions take a pointer to an 
 existing
 list, and append the configuration/data/control descriptor header to the list.
 
 In the case of configuration descriptors, the creation of a payload block may 
 be
 required (the payloads can hold VPE MMR values, or scaler coefficients). The
 allocation of the payload buffer and its content is left to the VPE driver.
 However, the VPDMA library provides helper macros to create payload in the
 correct format.
 
 Add debug functions to dump the descriptors in a way such that it's easy to 
 see
 the values of different fields in the descriptors.

There are lots of defines and inline functions in this patch. But at
least the ones I looked at were only used once.

For example, dtd_set_xfer_length_height() is called only in one place.
Then dtd_set_xfer_length_height() uses DTD_W1(), and again it's the only
place where DTD_W1() is used.

So instead of:

dtd_set_xfer_length_height(dtd, c_rect->width, height);

You could as well do:

dtd->xfer_length_height = (c_rect->width << DTD_LINE_LENGTH_SHFT) | height;

Now, presuming the compiler optimizes correctly, there should be no
difference between the two options above. My only point is that I wonder
if having multiple layers there improves readability at all. Some
helper funcs are rather trivial, like:

+static inline void dtd_set_w1(struct vpdma_dtd *dtd, u32 value)
+{
+	dtd->w1 = value;
+}

Then there are some, like dtd_set_type_ctl_stride(), that contains lots
of parameters. Hmm, okay, dtd_set_type_ctl_stride() is called in two
places, so at least in that case it makes sense to have that helper
func. But dtd_set_type_ctl_stride() uses DTD_W0(), and that's again the
only place where it's used.

So, I don't know. I'm not suggesting to change anything, I just started
wondering if all those macros and helpers actually help or not.

 Signed-off-by: Archit Taneja arc...@ti.com
 ---
  drivers/media/platform/ti-vpe/vpdma.c  | 269 +++
  drivers/media/platform/ti-vpe/vpdma.h  |  48 ++
  drivers/media/platform/ti-vpe/vpdma_priv.h | 695 
 +
  3 files changed, 1012 insertions(+)
 
 diff --git a/drivers/media/platform/ti-vpe/vpdma.c 
 b/drivers/media/platform/ti-vpe/vpdma.c
 index b15b3dd..b957381 100644
 --- a/drivers/media/platform/ti-vpe/vpdma.c
 +++ b/drivers/media/platform/ti-vpe/vpdma.c
 @@ -21,6 +21,7 @@
  #include linux/platform_device.h
  #include linux/sched.h
  #include linux/slab.h
 +#include linux/videodev2.h
  
  #include vpdma.h
  #include vpdma_priv.h
 @@ -425,6 +426,274 @@ int vpdma_submit_descs(struct vpdma_data *vpdma, struct 
 vpdma_desc_list *list)
   return 0;
  }
  
 +static void dump_cfd(struct vpdma_cfd *cfd)
 +{
 +	int class;
 +
 +	class = cfd_get_class(cfd);
 +
 +	pr_debug("config descriptor of payload class: %s\n",
 +		class == CFD_CLS_BLOCK ? "simple block" :
 +		"address data block");
 +
 +	if (class == CFD_CLS_BLOCK)
 +		pr_debug("word0: dst_addr_offset = 0x%08x\n",
 +			cfd_get_dest_addr_offset(cfd));
 +
 +	if (class == CFD_CLS_BLOCK)
 +		pr_debug("word1: num_data_wrds = %d\n", cfd_get_block_len(cfd));
 +
 +	pr_debug("word2: payload_addr = 0x%08x\n", cfd_get_payload_addr(cfd));
 +
 +	pr_debug("word3: pkt_type = %d, direct = %d, class = %d, dest = %d, "
 +		"payload_len = %d\n", cfd_get_pkt_type(cfd),
 +		cfd_get_direct(cfd), class, cfd_get_dest(cfd),
 +		cfd_get_payload_len(cfd));
 +}

There's quite a bit of code in these dump functions, and they are always
called. I'm sure getting that data is good for debugging, but I presume
they are quite useless for normal use. So I think they should be
compiled in only if some Kconfig option is selected.
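
For example something like this (the config symbol name here is made up):

struct vpdma_cfd;

#ifdef CONFIG_VIDEO_TI_VPDMA_DEBUG
static void dump_cfd(struct vpdma_cfd *cfd)
{
	/* ... the pr_debug() calls as in the patch ... */
}
#else
static inline void dump_cfd(struct vpdma_cfd *cfd) { }
#endif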

 +/*
 + * data transfer descriptor
 + *
 + * All fields are 32 bits to make them endian neutral

What does that mean? Why would 32bit fields make it endian neutral?

 + */
 +struct vpdma_dtd {
 + u32 type_ctl_stride;
 + union {
 + u32 xfer_length_height;
 + u32 w1;
 + };
 + dma_addr_t  start_addr;
 + u32 pkt_ctl;
 + union {
 + u32 frame_width_height; /* inbound */
 + dma_addr_t  desc_write_addr;/* outbound */

Are you sure dma_addr_t is always 32 bit?

 + };
 + union {
 + u32 start_h_v;  /* inbound */
 + u32 max_width_height;   /* outbound */
 + };
 + u32 client_attr0;
 + u32 client_attr1;
 +};

I'm not sure if I understand the struct right, but presuming this one
struct is used for both writing and reading, and certain set of fields
is used for writes and 

Re: [PATCH 3/6] v4l: ti-vpe: Add VPE mem to mem driver

2013-08-05 Thread Tomi Valkeinen
On 02/08/13 17:03, Archit Taneja wrote:
 VPE is a block which consists of a single memory to memory path which can
 perform chrominance up/down sampling, de-interlacing, scaling, and color space
 conversion of raster or tiled YUV420 coplanar, YUV422 coplanar or YUV422
 interleaved video formats.
 
 We create a mem2mem driver based primarily on the mem2mem-testdev example.
 The de-interlacer, scaler and color space converter are all bypassed for now
 to keep the driver simple. Chroma up/down sampler blocks are implemented, so
 conversion beteen different YUV formats is possible.
 
 Each mem2mem context allocates a buffer for VPE MMR values which it will use
 when it gets access to the VPE HW via the mem2mem queue, it also allocates
 a VPDMA descriptor list to which configuration and data descriptors are added.
 
 Based on the information received via v4l2 ioctls for the source and
 destination queues, the driver configures the values for the MMRs, and stores
 them in the buffer. There are also some VPDMA parameters like frame start and
 line mode which needs to be configured, these are configured by direct 
 register
 writes via the VPDMA helper functions.
 
 The driver's device_run() mem2mem op will add each descriptor based on how the
 source and destination queues are set up for the given ctx, once the list is
 prepared, it's submitted to VPDMA, these descriptors when parsed by VPDMA will
 upload MMR registers, start DMA of video buffers on the various input and 
 output
 clients/ports.
 
 When the list is parsed completely(and the DMAs on all the output ports done),
 an interrupt is generated which we use to notify that the source and 
 destination
 buffers are done.
 
 The rest of the driver is quite similar to other mem2mem drivers, we use the
 multiplane v4l2 ioctls as the HW support coplanar formats.
 
 Signed-off-by: Archit Taneja arc...@ti.com
 ---
  drivers/media/platform/Kconfig   |   10 +
  drivers/media/platform/Makefile  |2 +
  drivers/media/platform/ti-vpe/vpe.c  | 1763 
 ++
  drivers/media/platform/ti-vpe/vpe_regs.h |  496 +
  4 files changed, 2271 insertions(+)
  create mode 100644 drivers/media/platform/ti-vpe/vpe.c
  create mode 100644 drivers/media/platform/ti-vpe/vpe_regs.h

Just two quick comments, the same as to an earlier patch: consts for
tables, and write instead of insert.

 Tomi






Re: [PATCH 1/6] v4l: ti-vpe: Create a vpdma helper library

2013-08-05 Thread Tomi Valkeinen
On 05/08/13 14:26, Archit Taneja wrote:
 On Monday 05 August 2013 01:43 PM, Tomi Valkeinen wrote:

 +/*
 + * submit a list of DMA descriptors to the VPE VPDMA, do not wait
 for completion
 + */
 +int vpdma_submit_descs(struct vpdma_data *vpdma, struct
 vpdma_desc_list *list)
 +{
 +/* we always use the first list */
 +int list_num = 0;
 +int list_size;
 +
 +if (vpdma_list_busy(vpdma, list_num))
 +return -EBUSY;
 +
 +/* 16-byte granularity */
  +	list_size = (list->next - list->buf.addr) >> 4;
 +
  +	write_reg(vpdma, VPDMA_LIST_ADDR, (u32) list->buf.dma_addr);
 +wmb();

 What is the wmb() for?
 
 VPDMA_LIST_ADDR needs to be written before VPDMA_LIST_ATTR, otherwise
 VPDMA doesn't work. wmb() ensures the ordering.

Are you sure it's needed? Here's an interesting thread about writing and
reading to registers: http://marc.info/?t=13058859492&r=1&w=2

 +
  +	for (i = 0; i < 100; i++) {	/* max 1 second */
 +msleep_interruptible(10);

 You call interruptible version here, but you don't handle the
 interrupted case. I believe the loop will just continue looping, even if
 the user interrupted.
 
 Okay. I think I don't understand the interruptible version correctly. We
 don't need to msleep_interruptible here, we aren't waiting on any wake
 up event, we just want to wait till a bit gets set.

Well, I think the interruptible versions should be used when the user
(well, userspace program) initiates the action. The user should have the
option to interrupt a possibly long running operation, which is what
msleep_interruptible() makes possible.
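Something like this, reusing vpdma_list_busy() from the quoted patch, is what
I'd expect (the exact error code is just an example):

	int i;

	for (i = 0; i < 100; i++) {		/* max ~1 second */
		if (!vpdma_list_busy(vpdma, list_num))
			return 0;

		/* non-zero return means a signal woke us up early */
		if (msleep_interruptible(10))
			return -EINTR;
	}

	return -EBUSY;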

 Tomi






Re: [PATCH 2/6] v4l: ti-vpe: Add helpers for creating VPDMA descriptors

2013-08-05 Thread Tomi Valkeinen
On 05/08/13 15:05, Archit Taneja wrote:
 On Monday 05 August 2013 02:41 PM, Tomi Valkeinen wrote:

 There's quite a bit of code in these dump functions, and they are always
 called. I'm sure getting that data is good for debugging, but I presume
 they are quite useless for normal use. So I think they should be
 compiled in only if some Kconfig option is selected.
 
 Won't pr_debug() functions actually print something only when
 CONFIG_DYNAMIC_DEBUG is selected or if the DEBUG is defined? They will

If DEBUG is defined, they are always printed. If dynamic debug is in
use, the user has to enable debug prints for VPE for the dumps to be
printed.

 still consume a lot of code, but it would just end up in dummy printk
 calls, right?

Yes.

Well, I don't know VPE, so I can't really say how much those prints are
needed or not. They just looked very verbose to me.

I think we should have normal level debugging messages compiled in by
default, and for verbose there should be a separate compile option.
With verbose I mean something that may be useful if you are changing the
code and want to verify it or debugging some very odd bug. I.e. for the
developer of the driver.

And with normal I mean something that would be used when, say, somebody uses
VPE in an app, but things don't seem to be quite right, and there's a
need to get some info on what is going on. I.e. for a normal user.

But that's just my opinion, and it's obviously difficult to define those
clearly =). To be honest, I don't know how much overhead very verbose
kernel debug prints even cause. Maybe it's negligible.
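Roughly what I have in mind, with a hypothetical Kconfig symbol, so the dump
helper shrinks to a no-op unless the verbose option is selected:

#ifdef CONFIG_VIDEO_TI_VPE_VERBOSE_DEBUG	/* hypothetical option */
static void dump_dtd(struct vpdma_dtd *dtd)
{
	pr_debug("dtd: type_ctl_stride 0x%08x pkt_ctl 0x%08x\n",
		 dtd->type_ctl_stride, dtd->pkt_ctl);
	/* ... rest of the verbose dump ... */
}
#else
static inline void dump_dtd(struct vpdma_dtd *dtd) { }
#endif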

 +/*
 + * data transfer descriptor
 + *
 + * All fields are 32 bits to make them endian neutral

 What does that mean? Why would 32bit fields make it endian neutral?
 
 
 Each 32 bit field describes one word of the data descriptor. Each
 descriptor has a number of parameters.
 
 If we look at the word 'xfer_length_height'. It's composed of height
 (from bits 15:0) and width(from bits 31:16). If the word was expressed
 using bit fields, we can describe the word(in big endian) as:
 
 struct vpdma_dtd {
 ...
 unsigned int xfer_width:16;
 unsigned int xfer_height:16;
 ...
 ...
 };
 
 and in little endian as:
 
 struct vpdma_dtd {
 ...
 unsigned int xfer_height:16;
 unsigned int xfer_width:16;
 ...
 ...
 };
 
 So this representation makes it endian dependent. Maybe the comment
 should be improved saying that usage of u32 words instead of bit fields
 prevents endian issues.

No, I don't think that's correct. Endianness is about bytes, not 16 bit
words. The above text doesn't make much sense to me.

I haven't really worked with endianness issues, but maybe __le32 and
others should be used in the struct, if that struct is read by the HW.
And use cpu_to_le32() & others to write those. But googling will
probably give more info (I should read also =).

 + */
 +struct vpdma_dtd {
 +u32 type_ctl_stride;
 +union {
 +u32 xfer_length_height;
 +u32 w1;
 +};
 +dma_addr_t start_addr;
 +u32 pkt_ctl;
 +union {
 +u32 frame_width_height;  /* inbound */
 +dma_addr_t desc_write_addr;  /* outbound */

 Are you sure dma_addr_t is always 32 bit?
 
 I am not sure about this.

Is this struct directly read by the HW, or written to HW? If so, I
believe using dma_addr_t is very wrong here. Having a typedef like
dma_addr_t hides the actual type used for it. So even if it currently
would always be 32bit, there's no guarantee.
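As a sketch of both points (endianness and dma_addr_t), assuming VPDMA reads
the descriptor as little-endian 32-bit words; the layout below is
illustrative, not the real one:

#include <linux/types.h>
#include <linux/kernel.h>

struct vpdma_dtd {
	__le32 type_ctl_stride;
	__le32 xfer_length_height;
	__le32 start_addr;		/* low 32 bits of the DMA address */
	__le32 pkt_ctl;
	/* ... */
};

static void dtd_fill(struct vpdma_dtd *dtd, dma_addr_t addr,
		     u32 width, u32 height)
{
	/* explicit CPU-to-LE conversion when filling a HW-read descriptor */
	dtd->start_addr = cpu_to_le32(lower_32_bits(addr));

	/* width in bits 31:16, height in bits 15:0, as in the quoted layout */
	dtd->xfer_length_height = cpu_to_le32((width << 16) | height);
}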


 +};
 +union {
 +u32 start_h_v;  /* inbound */
 +u32 max_width_height;  /* outbound */
 +};
 +u32 client_attr0;
 +u32 client_attr1;
 +};

 I'm not sure if I understand the struct right, but presuming this one
 struct is used for both writing and reading, and certain set of fields
 is used for writes and other set for reads, would it make sense to have
 two different structs, instead of using unions? Although they do have
 many common fields, and the unions are a bit scattered there, so I don't
 know if that would be cleaner...
 
 It helps in having a common debug function, I don't see much benefit
 apart from that. I'll see if it's better to have them as separate structs.

Ok. Does the struct have any bit or such that tells us if the current
data is inbound or outbound?

 Tomi






Re: [omapdss] fault in dispc_write_irqenable [was: Re: [omap3isp] xclk deadlock]

2013-07-29 Thread Tomi Valkeinen
On 26/07/13 18:37, Jakub Piotr Cłapa wrote:

 Using omapfb, or...? I hope not
 omap_vout, because that's rather unmaintained =).
 
 Laurent's live application is using the V4L2 API for video output (to
 get free YUV conversion and DMA) so I guess this unfortunately counts
 as using omap_vout. Are there any alternatives I should look into? IIUC

Ok. Do you have a call trace for the dispc_write_irqenable crash? Maybe
it's something simple to fix.

 Tomi






Re: [omap3isp] xclk deadlock

2013-07-26 Thread Tomi Valkeinen
On 17/07/13 15:50, Laurent Pinchart wrote:
 Hi Jakub,
 
 (CC'ing Tomi Valkeinen)
 
 On Friday 12 July 2013 16:44:44 Jakub Piotr Cłapa wrote:

 2. When exiting from live the kernel hangs more often than not
 (sometimes it is accompanied by Unhandled fault: external abort on
 non-linefetch in dispc_write_irqenable in omapdss).
 
 I'll pass this one to Tomi :-)

Sounds like something is enabling/disabling dispc interrupts after the
clocks have already been turned off.

So what's the context here? What kernel? Using omapfb, or...? I hope not
omap_vout, because that's rather unmaintained =).

 Tomi






Re: [PATCH v17 2/7] video: add display_timing and videomode

2013-02-27 Thread Tomi Valkeinen
Ping.

On 2013-02-18 16:09, Tomi Valkeinen wrote:
 Hi Steffen,
 
 On 2013-01-25 11:01, Steffen Trumtrar wrote:
 
 +/* VESA display monitor timing parameters */
 +#define VESA_DMT_HSYNC_LOW  BIT(0)
 +#define VESA_DMT_HSYNC_HIGH BIT(1)
 +#define VESA_DMT_VSYNC_LOW  BIT(2)
 +#define VESA_DMT_VSYNC_HIGH BIT(3)
 +
 +/* display specific flags */
 +#define DISPLAY_FLAGS_DE_LOW            BIT(0)  /* data enable flag */
 +#define DISPLAY_FLAGS_DE_HIGH           BIT(1)
 +#define DISPLAY_FLAGS_PIXDATA_POSEDGE   BIT(2)  /* drive data on pos. edge */
 +#define DISPLAY_FLAGS_PIXDATA_NEGEDGE   BIT(3)  /* drive data on neg. edge */
 +#define DISPLAY_FLAGS_INTERLACED        BIT(4)
 +#define DISPLAY_FLAGS_DOUBLESCAN        BIT(5)
 
 snip
 
 +unsigned int dmt_flags; /* VESA DMT flags */
 +unsigned int data_flags; /* video data flags */
 
 Why did you go for this approach? To be able to represent
 true/false/not-specified?
 
 Would it be simpler to just have flags field? What does it give us to
 have those two separately?
 
 Should the above say rising edge/falling edge instead of positive
 edge/negative edge?
 
  Tomi
 






Re: [PATCH v17 2/7] video: add display_timing and videomode

2013-02-27 Thread Tomi Valkeinen
On 2013-02-27 18:05, Steffen Trumtrar wrote:
 Ah, sorry. Forgot to answer this.
 
 On Wed, Feb 27, 2013 at 05:45:31PM +0200, Tomi Valkeinen wrote:
 Ping.

 On 2013-02-18 16:09, Tomi Valkeinen wrote:
 Hi Steffen,

 On 2013-01-25 11:01, Steffen Trumtrar wrote:

 +/* VESA display monitor timing parameters */
 +#define VESA_DMT_HSYNC_LOW    BIT(0)
 +#define VESA_DMT_HSYNC_HIGH   BIT(1)
 +#define VESA_DMT_VSYNC_LOW    BIT(2)
 +#define VESA_DMT_VSYNC_HIGH   BIT(3)
 +
 +/* display specific flags */
 +#define DISPLAY_FLAGS_DE_LOW          BIT(0)  /* data enable flag */
 +#define DISPLAY_FLAGS_DE_HIGH         BIT(1)
 +#define DISPLAY_FLAGS_PIXDATA_POSEDGE BIT(2)  /* drive data on pos. edge */
 +#define DISPLAY_FLAGS_PIXDATA_NEGEDGE BIT(3)  /* drive data on neg. edge */
 +#define DISPLAY_FLAGS_INTERLACED      BIT(4)
 +#define DISPLAY_FLAGS_DOUBLESCAN      BIT(5)

 snip

 +  unsigned int dmt_flags; /* VESA DMT flags */
 +  unsigned int data_flags; /* video data flags */

 Why did you go for this approach? To be able to represent
 true/false/not-specified?

 
 We decided somewhere between v3 and v8 (I think), that those flags can be
 high/low/ignored.

Okay. Why aren't they enums, though? That always makes it clearer which
defines are to be used with which fields.
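Something along these lines is what I mean; the names are made up for
illustration, not a proposal for the actual header:

/*
 * One enum per tri-state property: "not specified" becomes explicit and
 * it is obvious which values belong to which field.
 */
enum display_de_pol {
	DISPLAY_DE_UNDEFINED,
	DISPLAY_DE_ACTIVE_LOW,
	DISPLAY_DE_ACTIVE_HIGH,
};

enum display_pixdata_edge {
	DISPLAY_PIXDATA_UNDEFINED,
	DISPLAY_PIXDATA_POSEDGE,	/* drive data on rising edge */
	DISPLAY_PIXDATA_NEGEDGE,	/* drive data on falling edge */
};

struct display_timing_flags {
	enum display_de_pol de;
	enum display_pixdata_edge pixdata;
	/* hsync, vsync, interlaced, doublescan, ... */
};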

 Would it be simpler to just have flags field? What does it give us to
 have those two separately?

 
 I decided to split them, so it is clear that some flags are VESA defined and
 the others are invented for the display-timings framework and may be
 extended.

Hmm... Okay. Is it relevant that they are VESA defined? It just seems to
complicate handling of the flags =).

 Should the above say rising edge/falling edge instead of positive
 edge/negative edge?

 
 Hm, I used posedge/negedge because it is shorter (and because of my Verilog 
 past
 pretty natural to me :-) ). I don't know what others are thinking though.

I guess it's quite clear, but it still uses different terms than those used
elsewhere, e.g. in the documentation for videomodes.

Another thing I noticed while using the new videomode: display_timings.h
has a few names that are quite short and generic, like TE_MIN, which
is now a global define, and timing_entry. Either name could well be
used internally in some .c file, and could easily clash.

 Tomi






Re: [PATCH v17 2/7] video: add display_timing and videomode

2013-02-18 Thread Tomi Valkeinen
Hi Steffen,

On 2013-01-25 11:01, Steffen Trumtrar wrote:

 +/* VESA display monitor timing parameters */
 +#define VESA_DMT_HSYNC_LOW   BIT(0)
 +#define VESA_DMT_HSYNC_HIGH  BIT(1)
 +#define VESA_DMT_VSYNC_LOW   BIT(2)
 +#define VESA_DMT_VSYNC_HIGH  BIT(3)
 +
 +/* display specific flags */
 +#define DISPLAY_FLAGS_DE_LOW          BIT(0)  /* data enable flag */
 +#define DISPLAY_FLAGS_DE_HIGH         BIT(1)
 +#define DISPLAY_FLAGS_PIXDATA_POSEDGE BIT(2)  /* drive data on pos. edge */
 +#define DISPLAY_FLAGS_PIXDATA_NEGEDGE BIT(3)  /* drive data on neg. edge */
 +#define DISPLAY_FLAGS_INTERLACED      BIT(4)
 +#define DISPLAY_FLAGS_DOUBLESCAN      BIT(5)

snip

 + unsigned int dmt_flags; /* VESA DMT flags */
 + unsigned int data_flags; /* video data flags */

Why did you go for this approach? To be able to represent
true/false/not-specified?

Would it be simpler to just have flags field? What does it give us to
have those two separately?

Should the above say rising edge/falling edge instead of positive
edge/negative edge?

 Tomi



Re: AW: omapdss/omap3isp/omapfb: Picture from omap3isp can't recover after a blank/unblank (or overlay disables after resuming)

2013-02-14 Thread Tomi Valkeinen
On 2013-02-14 11:30, Florian Neuhaus wrote:
 Hi Tomi,
 
 Tomi Valkeinen wrote on 2013-02-07:
 
 FIFO underflow means that the DSS hardware wasn't able to fetch enough 
 pixel data in time to output them to the panel. Sometimes this happens 
 because of plain misconfiguration, but usually it happens because of 
 the hardware just can't do things fast enough with the configuration 
 the user has set.

 In this case I see that you are using VRFB rotation on fb0, and the 
 rotation is
 270 degrees. Rotating the fb is heavy, especially 90 and 270 degrees. 
 It may be that when the DSS is resumed, there's a peak in the mem 
 usage as DSS suddenly needs to fetch lots of data.

 Another issue that could be involved is power management. After the 
 DSS is suspended, parts of OMAP may be put to sleep. When the DSS is 
 resumed, these parts need to be woken up, and it may be that there's a 
 higher mem latency for a short period of time right after resume. 
 Which could again cause DSS not getting enough pixel data.

 You say the issue doesn't happen if you disable fb0. What happens if 
 you disable fb0, blank the screen, then unblank the screen, and after 
 that enable fb0 again?
 
 By disable fb0 do you mean disconnect fb0 from ovl0 or disable ovl0?
 I have done both:
 http://pastebin.com/Bxm1Z2RY
 
 This works as expected.

I think both disconnecting fb0 and ovl0, and disabling ovl0 end up doing
the same, which is disabling ovl0. Which is what I meant.

So, if I understand correctly, this only happens at unblank, and can be
circumvented by temporarily keeping ovl0 disabled during the unblank,
and enabling ovl0 afterwards works fine.

So for some reason the time of unblank is extra heavy for the memory bus.

Archit, I have a feeling that enabling the LCD is heavier on the memory
bus than what happens at VBLANK, even if both start fetching the pixels
for a fresh frame. You've been twiddling with the FIFO stuff, am I right
there?

 Further tests I have done:
 
 Enable fb1/ovl1 and hit some keys on the keyboard to let fb0/ovl0 update in 
 the
 background causes a fifo underflow too:
 http://pastebin.com/f3JnMLsV
 
 This happens only, if I enable the vrfb (rotate=3). So the whole thing
 seems to be a rotation issue. Do you have some hints to trace down
 the problem?

Not rotation issue as such, but memory bandwidth issue.

 How about if you disable VRFB rotation, either totally, or set the 
 rotation to 0 or 180 degrees?
 
 Disable rotation is not an option for me, as we have a wrong oriented
 portrait display with 480x800 which we must use in landscape mode...

I understand, I only meant that for testing purposes. VRFB rotation with
0 and 180 degrees causes a slight impact on the mem bus, whereas 90 and 270
degree rotation causes a large impact. Also, as I mentioned earlier, the PM may
also affect this, as things may have been shut down in the OMAP. So
disabling PM related features could also fix the problem.

In many cases underflows are rather hard to debug and solve. There are
things in the DSS hardware like FIFO thresholds and prefetch, and VRFB
tile sizes, which can be changed (although unfortunately only by
modifying the drivers). How they should be changed is a difficult
question, though, and whether it'll help is also a question mark.

If you want to tweak those, I suggest you study them from the TRM.

 Tomi






Re: AW: omapdss/omap3isp/omapfb: Picture from omap3isp can't recover after a blank/unblank (or overlay disables after resuming)

2013-02-14 Thread Tomi Valkeinen
On 2013-02-14 13:07, Laurent Pinchart wrote:

 In many cases underflows are rather hard to debug and solve. There are
 things in the DSS hardware like FIFO thresholds and prefetch, and VRFB
 tile sizes, which can be changed (although unfortunately only by
 modifying the drivers). How they should be changed is a difficult
 question, though, and whether it'll help is also a question mark.
 
 Naive question here, instead of killing the overlay completely when an 
 underflow happens, couldn't the DSS driver somehow recover from that 
 condition 
 by restarting whatever needs to be restarted ?

Yes. Killing the overlay is just the safest choice. Presumably if an
underflow happens, the problem is still there, and it'll just happen
again if you re-enable the overlay. Obviously this is not always the
case, as this problem at hand shows.

There's much to improve with the DSS driver's error handling, though. I
think the first step would be to remove it totally from DSS, and move it to
omapfb/omapdrm. It's a bit difficult to handle the errors at the lowest
level.

 Tomi






Re: [RFC v2 0/5] Common Display Framework

2013-02-08 Thread Tomi Valkeinen
On 2013-02-04 12:05, Marcus Lorentzon wrote:

 As discussed at FOSDEM I will give DSI bus with full feature set a
 try.

Please do, but as a reminder I want to raise the issues I see with a DSI
bus:

- A device can be a child of only one bus. So if DSI is used only for
video, the device is a child of, say, i2c bus, and thus there's no DSI
bus. How to configure and use DSI in this case?

- If DSI is used for both control and video, we have two separate APIs
for the bus. What I mean here is that for the video-only case above, we
need a video-only-API for DSI. This API should contain all necessary
methods to configure DSI. But we need similar methods for the control
API also.

So, I hope you come up with some solution for this, but as I see it,
the simplest and clearest option is to have one video_source
style entity for the DSI bus itself, which is used for both control and
video.

 Tomi






Re: CDF meeting @FOSDEM report

2013-02-06 Thread Tomi Valkeinen
On 2013-02-06 16:44, Alex Deucher wrote:
 On Wed, Feb 6, 2013 at 6:11 AM, Tomi Valkeinen tomi.valkei...@ti.com wrote:

 What is an encoder? Something that takes a video signal in, and lets the
 CPU store the received data to memory? Isn't that a decoder?

 Or do you mean something that takes a video signal in, and outputs a
 video signal in another format? (transcoder?)
 
 In KMS parlance, we have two objects, a crtc and an encoder.  A crtc
 reads data from memory and produces a data stream with display timing.
  The encoder then takes that datastream and timing from the crtc and
 converts it some sort of physical signal (LVDS, TMDS, DP, etc.).  It's

Isn't the video stream between CRTC and encoder just as physical? It
just happens to be inside the GPU.

This is the case for OMAP, at least, where DISPC could be considered
CRTC, and DSI/HDMI/etc could be considered encoder. The stream between
DISPC and DSI/HDMI is plain parallel RGB signal. The video stream could
as well be outside OMAP.

 not always a perfect match to the hardware.  For example a lot of GPUs
 have a DVO encoder which feeds a secondary encoder like an sil164 DVO
 to TMDS encoder.

Right. I think mapping the DRM entities to CDF ones is one of the bigger
question marks we have with CDF. While I'm no expert on DRM, I think we
have the following options:

1. Force DRM's model to CDF, meaning one encoder.

2. Extend DRM to support multiple encoders in a chain.

3. Support multiple encoders in a chain in CDF, but somehow map them to
a single encoder in DRM side.

I really dislike the first option, as it would severely limit where CDF
can be used, or would force you to write some kind of combined drivers,
so that you can have one encoder driver running multiple encoder devices.

 Tomi






Re: [RFC v2 0/5] Common Display Framework

2012-12-19 Thread Tomi Valkeinen
On 2012-12-19 16:57, Jani Nikula wrote:

 It just seems to me that, at least from a DRM/KMS perspective, adding
 another layer (=CDF) for HDMI or DP (or legacy outputs) would be
 overengineering it. They are pretty well standardized, and I don't see
 there would be a need to write multiple display drivers for them. Each
 display controller has one, and can easily handle any chip specific
 requirements right there. It's my gut feeling that an additional
 framework would just get in the way. Perhaps there could be more common
 HDMI/DP helper style code in DRM to reduce overlap across KMS drivers,
 but that's another thing.
 
 So is the HDMI/DP drivers using CDF a more interesting idea from a
 non-DRM perspective? Or, put another way, is it more of an alternative
 to using DRM? Please enlighten me if there's some real benefit here that
 I fail to see!

The use of CDF is an option, not something that has to be done. A DRM
driver developer may use it if it gives benefit for him for that
particular driver.

I don't know much about desktop display hardware, but I guess that using
CDF would not really give much there. In some cases it could, if the IPs
used on the graphics card are something that are used elsewhere also
(sounds quite unlikely, though). In that case there could be separate
drivers for the IPs.

And note that CDF is not really about the dispc side, i.e. the part that
creates the video stream from pixels in the memory. It's more about the
components after that, and how to connect those components.

 For DSI panels (or DSI-to-whatever bridges) it's of course another
 story. You typically need a panel specific driver. And here I see the
 main point of the whole CDF: decoupling display controllers and the
 panel drivers, and sharing panel (and converter chip) specific drivers
 across display controllers. Making it easy to write new drivers, as
 there would be a model to follow. I'm definitely in favour of coming up
 with some framework that would tackle that.

Right. But if you implement drivers for DSI panels with CDF for, say,
OMAP, I think it's simpler to use CDF also for HDMI/DP on OMAP.
Otherwise it'll be a mishmash with two different models.

 Tomi






Re: [RFC v2 0/5] Common Display Framework

2012-12-19 Thread Tomi Valkeinen
On 2012-12-19 17:26, Rob Clark wrote:
 On Wed, Dec 19, 2012 at 8:57 AM, Jani Nikula
 jani.nik...@linux.intel.com wrote:

 Hi Laurent -

 On Tue, 18 Dec 2012, Laurent Pinchart laurent.pinch...@ideasonboard.com 
 wrote:
 Hi Jani,

 On Monday 17 December 2012 18:53:37 Jani Nikula wrote:
 I can see the need for a framework for DSI panels and such (in fact Tomi
 and I have talked about it like 2-3 years ago already!) but what is the
 story for HDMI and DP? In particular, what's the relationship between
 DRM and CDF here? Is there a world domination plan to switch the DRM
 drivers to use this framework too? ;) Do you have some rough plans how
 DRM and CDF should work together in general?

 There's always a world domination plan, isn't there ? :-)

 I certainly want CDF to be used by DRM (or more accurately KMS). That's what
 the C stands for, common refers to sharing panel and other display entity
 drivers between FBDEV, KMS and V4L2.

 I currently have no plan to expose CDF internals to userspace through the 
 KMS
 API. We might have to do so later if the hardware complexity grows in such a
 way that finer control than what KMS provides needs to be exposed to
 userspace, but I don't think we're there yet. The CDF API will thus only be
 used internally in the kernel by display controller drivers. The KMS core
 might get functions to handle common display entity operations, but the bulk
 of the work will be in the display controller drivers to start with. We will
 then see what can be abstracted in KMS helper functions.

 Regarding HDMI and DP, I imagine HDMI and DP drivers that would use the CDF
 API. That's just a thought for now, I haven't tried to implement them, but 
 it
 would be nice to handle HDMI screens and DPI/DBI/DSI panels in a generic 
 way.

 Do you have thoughts to share on this topic ?

 It just seems to me that, at least from a DRM/KMS perspective, adding
 another layer (=CDF) for HDMI or DP (or legacy outputs) would be
 overengineering it. They are pretty well standardized, and I don't see
 there would be a need to write multiple display drivers for them. Each
 display controller has one, and can easily handle any chip specific
 requirements right there. It's my gut feeling that an additional
 framework would just get in the way. Perhaps there could be more common
 HDMI/DP helper style code in DRM to reduce overlap across KMS drivers,
 but that's another thing.

 So is the HDMI/DP drivers using CDF a more interesting idea from a
 non-DRM perspective? Or, put another way, is it more of an alternative
 to using DRM? Please enlighten me if there's some real benefit here that
 I fail to see!
 
 fwiw, I think there are at least a couple cases where multiple SoC's
 have the same HDMI IP block.
 
 And, there are also external HDMI encoders (for example connected over
 i2c) that can also be shared between boards.  So I think there will be
 a number of cases where CDF is appropriate for HDMI drivers.  Although
 trying to keep this all independent of DRM (as opposed to just
 something similar to what drivers/gpu/i2c is today) seems a bit
 overkill for me.  Being able to use the helpers in drm and avoiding an
 extra layer of translation seems like the better option to me.  So my
 vote would be drivers/gpu/cdf.

Well, we need to think about that. I would like to keep CDF independent
of DRM. I don't like tying different components/frameworks together if
there's no real need for that.

Also, something that Laurent mentioned in our face-to-face discussions:
Some IPs/chips can be used for other purposes than with DRM.

He had an example of a board that (if I understood right) gets a video
signal from somewhere outside the board, processes the signal with some
IPs/chips, and then outputs the signal. So there's no framebuffer, and
the image is not stored anywhere. I think the framework used in these
cases is always v4l2.

The IPs/chips in the above model may be the exact same IPs/chips that
are used with normal display. If the CDF was tied to DRM, using the
same drivers for normal and these streaming cases would probably not be
possible.

 Tomi






Re: [RFC v2 0/5] Common Display Framework

2012-12-17 Thread Tomi Valkeinen
On 2012-12-17 16:36, Laurent Pinchart wrote:
 Hi Tomi,
 
 I finally have time to work on a v3 :-)
 
 On Friday 23 November 2012 16:51:37 Tomi Valkeinen wrote:
 On 2012-11-22 23:45, Laurent Pinchart wrote:
 From: Laurent Pinchart laurent.pinchart+rene...@ideasonboard.com

 Hi everybody,

 Here's the second RFC of what was previously known as the Generic Panel
 Framework.

 Nice work! Thanks for working on this.

 I was doing some testing with the code, seeing how to use it in omapdss.
 Here are some thoughts:

 In your model the DSS gets the panel devices connected to it from
 platform data. After the DSS and the panel drivers are loaded, DSS gets
 a notification and connects DSS and the panel.

 I think it's a bit limited way. First of all, it'll make the DT data a
 bit more complex (although this is not a major problem). With your
 model, you'll need something like:

 soc-base.dtsi:

 dss {
  dpi0: dpi {
  };
 };

 board.dts:

 dpi0 {
  panel = dpi-panel;
 };

 / {
  dpi-panel: dpi-panel {
  ...panel data...;
  };
 };

 Second, it'll prevent hotplug, and even if real hotplug would not be
 supported, it'll prevent cases where the connected panel must be found
 dynamically (like reading ID from eeprom).
 
 Hotplug definitely needs to be supported, as the common display framework 
 also 
 targets HDMI and DP. The notification mechanism was actually designed to 
 support hotplug.

HDMI or DP hotplug may or may not be a different thing than what I talk
about here. We may have two kinds of hotplug: real linux device hotplug,
i.e. a linux device appears or is removed during runtime, or just a
cable hotplug, handled inside a driver, which doesn't have any effect on
the linux devices.

If we do implement HDMI and DP monitors with real linux drivers, then
yes, we could use real hotplug. But we could as well have the monitor
driver always registered, and just have a driver internal cable-hotplug
system.

To be honest, I'm not sure if implementing real hotplug is easily
possible, as we don't have real, probable (probe-able =) busses. So even
if we'd get a hotplug event of a new display device, what kind of device
would the bus master register? It has no way to know that.

 How do you see the proposal preventing hotplug ?

Well, probably it doesn't prevent. But it doesn't feel right to me.

Say, if we have a DPI panel, controlled via foo-bus, which has a probing
mechanism. When the foo-bus master detects a new hardware device, it'll
create linux device for it. The driver for this device will then be
probed. In the probe function it should somehow register itself to the
cdf, or perhaps the previous entity in the chain.

This sounds to me that the link is from the panel to the previous
entity, not the other way around as you describe, and also the previous
entity doesn't know of the panel entities.

 Third, it kinda creates a cyclical dependency: the DSS needs to know
 about the panel and calls ops in the panel, and the panel calls ops in
 the DSS. I'm not sure if this is an actual problem, but I usually find
 it simpler if calls are done only in one direction.
 
 I don't see any way around that. The panel is not a standalone entity that 
 can 
 only receive calls (as it needs to control video streams, per your request 
 :-)) or only emit calls (as something needs to control it, userspace doesn't 
 control the panel directly).

Right, but as I see it, the destination of the panel's calls, and the
source of the calls to panel are different things. The destination is
the bus layer, dealing with the video signal being transferred. The
source is a bit higher level thing, something that's controlling the
display in general.

 Here we wouldn't have similar display_entity as you have, but video sources
 and displays. Video sources are elements in the video pipeline, and a video
 source is used only by the next downstream element. The last element in the
 pipeline would not be a video source, but a display, which would be used by
 the upper layer.
 
 I don't think we should handle pure sources, pure sinks (displays) and mixed 
 entities (transceivers) differently. I prefer having abstract entities that 
 can have a source and a sink, and expose the corresponding operations. That 
 would make pipeline handling much easier, as the code will only need to deal 
 with a single type of object. Implementing support for entities with multiple 
 sinks and/or sources would also be possible.

Ok. I think having pure sources is a simpler model, but it's true that if
we need to iterate and study the pipeline during runtime, it's probably
better to have single entities with multiple sources/sinks.

 Video source's ops would deal with things related to the video bus in
 question, like configuring data lanes, sending DSI packets, etc. The
 display ops would be more high level things, like enable, update, etc.
 Actually, I guess you could consider the display to represent and deal
 with the whole pipeline, while

Re: [PATCHv15 3/7] video: add of helper for display timings/videomode

2012-12-10 Thread Tomi Valkeinen
On 2012-12-07 16:12, Philipp Zabel wrote:
 Hi,
 
 Am Montag, den 26.11.2012, 18:56 +0200 schrieb Tomi Valkeinen:

 So what does the pixelclk-inverted mean? Normally the SoC drives pixel
 data on rising edge, and the panel samples it at falling edge? And
 vice-versa for inverted? Or the other way around?

 When is hsync/vsync set? On rising or falling edge of pclk?

 My point here is that the pixelclk-inverted is not crystal clear thing,
 like the hsync/vsync/de-active values are.

 And while thinking about this, I realized that the meaning of
 pixelclk-inverted depends on what component is it applied to. Presuming
 normal pixclk means pixel data on rising edge, the meaning of that
 depends on do we consider the SoC or the panel. The panel needs to
 sample the data on the other edge from the one the SoC uses to drive the
 data.

 Does the videomode describe the panel, or does it describe the settings
 programmed to the SoC?
 
 How about calling this property pixelclk-active, active high meaning
 driving pixel data on rising edges and sampling on falling edges (the
 pixel clock is high between driving and sampling the data), and active
 low meaning driving on falling edges and sampling on rising edges?
 It is the same from the SoC perspective and from the panel perspective,
 and it mirrors the usage of the other *-active properties.

This sounds good to me. It's not quite correct, as neither the pixel clock
nor the pixel data is really active when the clock is high/low, but it
still makes sense and is clear (at least with a short description).

 Tomi






Re: [PATCHv15 2/7] video: add display_timing and videomode

2012-12-07 Thread Tomi Valkeinen
On 2012-12-06 12:07, Grant Likely wrote:
 On Mon, 26 Nov 2012 16:39:58 +0100, Steffen Trumtrar 
 s.trumt...@pengutronix.de wrote:
 On Mon, Nov 26, 2012 at 02:37:26PM +0200, Tomi Valkeinen wrote:
 On 2012-11-26 11:07, Steffen Trumtrar wrote:

 +/*
 + * Subsystem independent description of a videomode.
 + * Can be generated from struct display_timing.
 + */
 +struct videomode {
 +  u32 pixelclock; /* pixelclock in Hz */

 I don't know if this is of any importance, but the linux clock framework
 manages clock rates with unsigned long. Would it be better to use the
 same type here?


 Hm, I don't know. Anyone? u32 should be large enough for a pixelclock.
 
 4GHz is a pretty large pixel clock. I have no idea how conceivable it is
 that hardware will get to that speed. However, if it will ever be
 larger, then you'll need to account for that in the DT binding so that
 the pixel clock can be specified using 2 cells.

I didn't mention the type because of the size of the field, but only
because to me it makes sense to use the same type for clock rates all
around the kernel. In many cases the value will be passed to clk_set_rate().

I can't see any real issues with u32, though.
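Just to illustrate: a u32 rate passes cleanly into the clock API, which takes
an unsigned long, so nothing is lost either way:

#include <linux/clk.h>

static int apply_pixel_clock(struct clk *pixel_clk, const struct videomode *vm)
{
	/* u32 promotes to unsigned long without truncation */
	return clk_set_rate(pixel_clk, vm->pixelclock);
}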

 Tomi






Re: [RFC v2 3/5] video: display: Add MIPI DBI bus support

2012-11-30 Thread Tomi Valkeinen
Hi,

On 2012-11-22 23:45, Laurent Pinchart wrote:
 From: Laurent Pinchart laurent.pinchart+rene...@ideasonboard.com
 
 Signed-off-by: Laurent Pinchart laurent.pinch...@ideasonboard.com
 ---
  drivers/video/display/Kconfig|4 +
  drivers/video/display/Makefile   |1 +
  drivers/video/display/mipi-dbi-bus.c |  228 ++
  include/video/display.h  |5 +
  include/video/mipi-dbi-bus.h |  125 +++
  5 files changed, 363 insertions(+), 0 deletions(-)
  create mode 100644 drivers/video/display/mipi-dbi-bus.c
  create mode 100644 include/video/mipi-dbi-bus.h

I've been doing some omapdss testing with CDF and DSI, and I have some
thoughts about the bus stuff. I already told these to Laurent, but I'll
write them to the mailing list also for discussion.

So with the current CDF model we have separate control and video buses.
The control bus is represented as proper Linux bus, and video bus is
represented via custom display_entity. This sounds good on paper, and I
also agreed to this approach when we were planning CDF.

However, now I doubt that approach.

First, I want to list some examples of devices with different bus
configurations:

1) Panel without any control, only video bus
2) Panel with separate control and video buses, e.g. i2c for control,
DPI for video
3) Panel with the same control and video buses, like DSI or DBI.

The first one is simple, it's just a platform device. No questions there.

The second one can be a bit tricky. Say, if we have a panel controlled
via i2c, and DSI/DBI used for video. The problem here is that with the
current model, DSI/DBI would be represented as a real bus, for control.
But in this case there's only the video path.

So if all the DSI/DBI bus configuration is handled on the DSI/DBI
control bus side, how can it be handled with only the video bus? And if
we add the same bus configuration to the video bus side as we have on
control bus side, then we have duplicated the API, and it's also
somewhat confusing. I don't have any good suggestion for this.

Third one is kinda clear, but I feel slightly uneasy about it. In theory
we can have separate control and video buses, which use the same HW
transport. However, I feel that we'll have some trouble with the
implementation, as we'll then have two more or less independent users
for the HW transport. I can't really point out why this would not be
possible to implement, but I have a gut feeling that it will be
difficult, at least for DSI.

So I think my question is: what does it give us to have separate control
and video buses, and what does the Linux bus give us with the control bus?

I don't see us ever having a case where a device would use one of the
display buses only for control. So either the display bus is only used
for video, or it's used for both control and video. And the display bus
is always 1-to-1, so we're talking about really simple bus here.

I believe things would be much simpler if we just have one entity for
the display buses, which support both video and (when available)
control. What would be the downsides of this approach versus the current
CDF proposal?

 Tomi






Re: [PATCH 0/2] omap_vout: remove cpu_is_* uses

2012-11-29 Thread Tomi Valkeinen
On 2012-11-28 17:13, Laurent Pinchart wrote:
 Hi Tomi,
 
 On Monday 12 November 2012 15:33:38 Tomi Valkeinen wrote:
 Hi,

 This patch removes use of cpu_is_* funcs from omap_vout, and uses omapdss's
 version instead. The other patch removes an unneeded plat/dma.h include.

 These are based on current omapdss master branch, which has the omapdss
 version code. The omapdss version code is queued for v3.8. I'm not sure
 which is the best way to handle these patches due to the dependency to
 omapdss. The easiest option is to merge these for 3.9.

 There's still the OMAP DMA use in omap_vout_vrfb.c, which is the last OMAP
 dependency in the omap_vout driver. I'm not going to touch that, as it
 doesn't look as trivial as this cpu_is_* removal, and I don't have much
 knowledge of the omap_vout driver.

 Compiled, but not tested.
 
 Tested on a Beagleboard-xM.
 
 Tested-by: Laurent Pinchart laurent.pinch...@ideasonboard.com
 Acked-by: Laurent Pinchart laurent.pinch...@ideasonboard.com

Thanks.

 The patches depend on unmerged OMAP DSS patches. Would you like to push this 
 series through linuxtv or through your DSS tree ? The later might be easier, 
 depending on when the required DSS patches will hit mainline.

The DSS patches will be merged for 3.8. I can take this via the omapdss
tree, as there probably won't be any conflicts with other v4l2 stuff.

Or, we can just delay these until 3.9. These patches remove omap
platform dependencies, helping the effort to get a common ARM kernel.
However, as there's still the VRFB code in the omap_vout driver, the
dependency remains. Thus, in a way, these patches alone don't help
anything, and we could delay these for 3.9 and hope that
omap_vout_vrfb.c gets converted also for that merge window.

 Tomi






Re: [PATCH 0/2] omap_vout: remove cpu_is_* uses

2012-11-29 Thread Tomi Valkeinen
On 2012-11-29 19:05, Mauro Carvalho Chehab wrote:
 On Thu, 29 Nov 2012 17:39:45 +0100,
 Laurent Pinchart laurent.pinch...@ideasonboard.com wrote:

 Please rather queue the cpu_is_omap removal to v3.8 so I can
 remove plat/cpu.h for omap2+.

 In that case the patches should go through the DSS tree. Mauro, are you fine 
 with that ?
 
 Sure.
 
 For both patches:
 
 Acked-by: Mauro Carvalho Chehab mche...@redhat.com

Okay, thanks, I'll apply them to omapdss tree.

 Tomi






Re: [RFC v2 2/5] video: panel: Add DPI panel support

2012-11-27 Thread Tomi Valkeinen
Hi,

On 2012-11-22 23:45, Laurent Pinchart wrote:

 +static void panel_dpi_release(struct display_entity *entity)
 +{
 + struct panel_dpi *panel = to_panel_dpi(entity);
 +
 + kfree(panel);
 +}
 +
 +static int panel_dpi_remove(struct platform_device *pdev)
 +{
 + struct panel_dpi *panel = platform_get_drvdata(pdev);
 +
 + platform_set_drvdata(pdev, NULL);
 + display_entity_unregister(&panel->entity);
 +
 + return 0;
 +}
 +
 +static int __devinit panel_dpi_probe(struct platform_device *pdev)
 +{
 + const struct panel_dpi_platform_data *pdata = pdev->dev.platform_data;
 + struct panel_dpi *panel;
 + int ret;
 +
 + if (pdata == NULL)
 + return -ENODEV;
 +
 + panel = kzalloc(sizeof(*panel), GFP_KERNEL);
 + if (panel == NULL)
 + return -ENOMEM;
 +
 + panel->pdata = pdata;
 + panel->entity.dev = &pdev->dev;
 + panel->entity.release = panel_dpi_release;
 + panel->entity.ops.ctrl = &panel_dpi_control_ops;
 +
 + ret = display_entity_register(&panel->entity);
 + if (ret < 0) {
 + kfree(panel);
 + return ret;
 + }
 +
 + platform_set_drvdata(pdev, panel);
 +
 + return 0;
 +}
 +
 +static const struct dev_pm_ops panel_dpi_dev_pm_ops = {
 +};
 +
 +static struct platform_driver panel_dpi_driver = {
 + .probe = panel_dpi_probe,
 + .remove = panel_dpi_remove,
 + .driver = {
 + .name = panel_dpi,
 + .owner = THIS_MODULE,
 + .pm = panel_dpi_dev_pm_ops,
 + },
 +};

I'm not sure of how the free/release works. The release func is called
when the ref count drops to zero. But... The object in question, the
panel_dpi struct which contains the display entity, is not only about
data, it's also about code located in this module.

So I don't see anything preventing from unloading this module, while
some other component is holding a ref for the display entity. While its
holding the ref, it's valid to call ops in the display entity, but the
code for the ops in this module is already unloaded.

I don't really know how the kref can be used properly in this use case...
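One pattern that might help here, sketched under the assumption that
display_entity grows an owner field set to THIS_MODULE by the entity drivers
(this is not in the posted RFC): the framework pins the owning module for as
long as a user holds the entity, so the ops code cannot be unloaded underneath
it.

#include <linux/module.h>

int display_entity_get(struct display_entity *entity)
{
	/* 'owner' would be a new field, filled in by the entity drivers */
	if (!try_module_get(entity->owner))
		return -ENODEV;

	/* ...then take the existing kref on the entity, as today... */
	return 0;
}

void display_entity_put(struct display_entity *entity)
{
	struct module *owner = entity->owner;

	/* ...drop the kref first (which may invoke release)... */
	module_put(owner);
}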

 Tomi






Re: [RFC v2 1/5] video: Add generic display entity core

2012-11-27 Thread Tomi Valkeinen
Hi,

On 2012-11-22 23:45, Laurent Pinchart wrote:
 +/**
 + * display_entity_get_modes - Get video modes supported by the display entity
 + * @entity The display entity
 + * @modes: Pointer to an array of modes
 + *
 + * Fill the modes argument with a pointer to an array of video modes. The 
 array
 + * is owned by the display entity.
 + *
 + * Return the number of supported modes on success (including 0 if no mode is
 + * supported) or a negative error code otherwise.
 + */
 +int display_entity_get_modes(struct display_entity *entity,
 +  const struct videomode **modes)
 +{
 + if (!entity->ops.ctrl || !entity->ops.ctrl->get_modes)
 + return 0;
 +
 + return entity->ops.ctrl->get_modes(entity, modes);
 +}
 +EXPORT_SYMBOL_GPL(display_entity_get_modes);
 +
 +/**
 + * display_entity_get_size - Get display entity physical size
 + * @entity: The display entity
 + * @width: Physical width in millimeters
 + * @height: Physical height in millimeters
 + *
 + * When applicable, for instance for display panels, retrieve the display
 + * physical size in millimeters.
 + *
 + * Return 0 on success or a negative error code otherwise.
 + */
 +int display_entity_get_size(struct display_entity *entity,
 + unsigned int *width, unsigned int *height)
 +{
 + if (!entity->ops.ctrl || !entity->ops.ctrl->get_size)
 + return -EOPNOTSUPP;
 +
 + return entity->ops.ctrl->get_size(entity, width, height);
 +}
 +EXPORT_SYMBOL_GPL(display_entity_get_size);

How do you envision these to be used with, say, DVI monitors with EDID
data? Should each panel driver, that manages a device with EDID, read
and parse the EDID itself? I guess that shouldn't be too difficult with
a common EDID lib, but that will only expose some of the information
found from EDID. Should the upper levels also have a way to get the raw
EDID data, in addition to funcs like above?
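Purely as a thought experiment, a raw-EDID hook could follow the same pattern
as the ops above; get_edid() is hypothetical and not in the RFC:

struct display_entity_control_ops {
	/* ...existing get_modes, get_size, ... */
	int (*get_edid)(struct display_entity *entity, u8 *buf, size_t len);
};

int display_entity_get_edid(struct display_entity *entity, u8 *buf, size_t len)
{
	if (!entity->ops.ctrl || !entity->ops.ctrl->get_edid)
		return -EOPNOTSUPP;

	return entity->ops.ctrl->get_edid(entity, buf, len);
}
EXPORT_SYMBOL_GPL(display_entity_get_edid);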

 Tomi






Re: [PATCHv15 2/7] video: add display_timing and videomode

2012-11-26 Thread Tomi Valkeinen
On 2012-11-26 11:07, Steffen Trumtrar wrote:

 +/*
 + * Subsystem independent description of a videomode.
 + * Can be generated from struct display_timing.
 + */
 +struct videomode {
 + u32 pixelclock; /* pixelclock in Hz */

I don't know if this is of any importance, but the linux clock framework
manages clock rates with unsigned long. Would it be better to use the
same type here?

 + u32 hactive;
 + u32 hfront_porch;
 + u32 hback_porch;
 + u32 hsync_len;
 +
 + u32 vactive;
 + u32 vfront_porch;
 + u32 vback_porch;
 + u32 vsync_len;
 +
 + u32 hah;  /* hsync active high */
 + u32 vah;  /* vsync active high */
 + u32 de;   /* data enable */
 + u32 pixelclk_pol;

What values will these 4 have? Aren't these booleans?

The "data enable" comment should be "data enable active high".

The next patch says in the devtree binding documentation that the values
may be on/off/ignored. Is that represented here somehow, i.e. values are
0/1/2 or so? If not, what does it mean if the value is left out from
devtree data, meaning not used on hardware?

There's also a doubleclk value mentioned in the devtree bindings doc,
but not shown here.

I think this videomode struct is the one that display drivers are going
to use (?), in addition to the display_timing, so it would be good to
document it well.

 Tomi






Re: [PATCHv15 3/7] video: add of helper for display timings/videomode

2012-11-26 Thread Tomi Valkeinen
Hi,

On 2012-11-26 11:07, Steffen Trumtrar wrote:
 This adds support for reading display timings from DT into a struct
 display_timings. The of_display_timing implementation supports multiple
 subnodes. All children are read into an array, that can be queried.
 
 If no native mode is specified, the first subnode will be used.
 
 For cases where the graphics driver knows there can be only one
 mode description or where the driver only supports one mode, a helper
 function of_get_videomode is added, that gets a struct videomode from DT.
 
 Signed-off-by: Steffen Trumtrar s.trumt...@pengutronix.de
 Signed-off-by: Philipp Zabel p.za...@pengutronix.de
 Acked-by: Stephen Warren swar...@nvidia.com
 Reviewed-by: Thierry Reding thierry.red...@avionic-design.de
 Acked-by: Thierry Reding thierry.red...@avionic-design.de
 Tested-by: Thierry Reding thierry.red...@avionic-design.de
 Tested-by: Philipp Zabel p.za...@pengutronix.de
 Reviewed-by: Laurent Pinchart laurent.pinch...@ideasonboard.com
 Acked-by: Laurent Pinchart laurent.pinch...@ideasonboard.com
 ---
  .../devicetree/bindings/video/display-timing.txt   |  107 ++
  drivers/video/Kconfig  |   15 ++
  drivers/video/Makefile |2 +
 drivers/video/of_display_timing.c  |  219 
  drivers/video/of_videomode.c   |   54 +
  include/linux/of_display_timing.h  |   20 ++
  include/linux/of_videomode.h   |   18 ++
  7 files changed, 435 insertions(+)
  create mode 100644 Documentation/devicetree/bindings/video/display-timing.txt
  create mode 100644 drivers/video/of_display_timing.c
  create mode 100644 drivers/video/of_videomode.c
  create mode 100644 include/linux/of_display_timing.h
  create mode 100644 include/linux/of_videomode.h
 
 diff --git a/Documentation/devicetree/bindings/video/display-timing.txt 
 b/Documentation/devicetree/bindings/video/display-timing.txt
 new file mode 100644
 index 000..e238f27
 --- /dev/null
 +++ b/Documentation/devicetree/bindings/video/display-timing.txt
 @@ -0,0 +1,107 @@
 +display-timing bindings
 +===
 +
 +display-timings node
 +
 +
 +required properties:
 + - none
 +
 +optional properties:
 + - native-mode: The native mode for the display, in case multiple modes are
 + provided. When omitted, assume the first node is the native.
 +
 +timing subnode
 +--
 +
 +required properties:
 + - hactive, vactive: display resolution
 + - hfront-porch, hback-porch, hsync-len: horizontal display timing parameters
 +   in pixels
 +   vfront-porch, vback-porch, vsync-len: vertical display timing parameters 
 in
 +   lines
 + - clock-frequency: display clock in Hz
 +
 +optional properties:
 + - hsync-active: hsync pulse is active low/high/ignored
 + - vsync-active: vsync pulse is active low/high/ignored
 + - de-active: data-enable pulse is active low/high/ignored
 + - pixelclk-inverted: pixelclock is inverted (active on falling edge)/
 + non-inverted (active on rising edge)/
 +  ignored (ignore property)

I think hsync-active and vsync-active are clear, and commonly used, and
they are used for both drm and fb mode conversions in later patches.

de-active is not used in drm and fb mode conversions, but I think it's
also clear.

pixelclk-inverted is not used in the mode conversions. It's also a bit
unclear to me. What does it mean that pix clock is active on rising
edge? The pixel data is driven on rising edge? How about the sync
signals and DE, when are they driven? Does your HW have any settings
related to those?

OMAP has the invert pclk setting, but it also has a setting to define
when the sync signals are driven. The options are:
- syncs are driven on rising edge of pclk
- syncs are driven on falling edge of pclk
- syncs are driven on the opposite edge of pclk compared to the pixel data

For DE there's no setting, except the active high/low.

And if I'm not mistaken, if the optional properties are not defined,
they are not ignored, but left to the default 0. Which means active low,
or active on rising edge(?). I think it would be good to have an
undefined value for the properties.

 + - interlaced (bool): boolean to enable interlaced mode
 + - doublescan (bool): boolean to enable doublescan mode
 + - doubleclk (bool)

As I mentioned in the other mail, doubleclk is not used nor documented here.

 +All the optional properties that are not bool follow the following logic:
 +1: high active
 +0: low active
 +omitted: not used on hardware
 +
 +There are different ways of describing the capabilities of a display. The 
 devicetree
 +representation corresponds to the one commonly found in datasheets for 
 displays.
 +If a display supports multiple signal timings, the native-mode can be 
 specified.

I have some of the same concerns for this series than with 

Re: [PATCHv15 3/7] video: add of helper for display timings/videomode

2012-11-26 Thread Tomi Valkeinen
On 2012-11-26 18:10, Steffen Trumtrar wrote:
 Hi,
 
 On Mon, Nov 26, 2012 at 04:38:36PM +0200, Tomi Valkeinen wrote:

 +optional properties:
 + - hsync-active: hsync pulse is active low/high/ignored
 + - vsync-active: vsync pulse is active low/high/ignored
 + - de-active: data-enable pulse is active low/high/ignored
 + - pixelclk-inverted: pixelclock is inverted (active on falling edge)/
 +   non-inverted (active on rising edge)/
 +ignored (ignore property)

 I think hsync-active and vsync-active are clear, and commonly used, and
 they are used for both drm and fb mode conversions in later patches.

 de-active is not used in drm and fb mode conversions, but I think it's
 also clear.

 pixelclk-inverted is not used in the mode conversions. It's also a bit
 unclear to me. What does it mean that pix clock is active on rising
 edge? The pixel data is driven on rising edge? How about the sync
 signals and DE, when are they driven? Does your HW have any settings
 related to those?

 
 Those are properties commonly found in display specs. That is why they are 
 here.
 If the GPU does not support the property it can be omitted.

So what does the pixelclk-inverted mean? Normally the SoC drives pixel
data on rising edge, and the panel samples it at falling edge? And
vice-versa for inverted? Or the other way around?

When is hsync/vsync set? On rising or falling edge of pclk?

My point here is that the pixelclk-inverted is not crystal clear thing,
like the hsync/vsync/de-active values are.

And while thinking about this, I realized that the meaning of
pixelclk-inverted depends on what component is it applied to. Presuming
normal pixclk means pixel data on rising edge, the meaning of that
depends on do we consider the SoC or the panel. The panel needs to
sample the data on the other edge from the one the SoC uses to drive the
data.

Does the videomode describe the panel, or does it describe the settings
programmed to the SoC?

 OMAP has the invert pclk setting, but it also has a setting to define
 when the sync signals are driven. The options are:
 - syncs are driven on rising edge of pclk
 - syncs are driven on falling edge of pclk
 - syncs are driven on the opposite edge of pclk compared to the pixel data

 For DE there's no setting, except the active high/low.

 And if I'm not mistaken, if the optional properties are not defined,
 they are not ignored, but left to the default 0. Which means active low,
 or active on rising edge(?). I think it would be good to have a
 undefined value for the properties.

 
 Yes. As mentioned in my other mail, the intention of the omitted properties does
 not propagate properly. Omitted must be a value < 0, so it is clear at a later
 stage that this property shall not be used, and isn't unintentionally considered
 to be active low.

Ok. Just note that the values are currently stored into u32, and I don't
think using negative error values with u32 is a good idea.

 I have some of the same concerns for this series than with the
 interpreted power sequences (on fbdev): when you publish the DT
 bindings, it's somewhat final version then, and fixing it later will be
 difficult. Of course, video modes are much clearer than the power
 sequences, so it's possible there won't be any problems with the DT
 bindings.

 
 The binding is pretty much at the bare minimum after a lot of discussion about
 the properties. Even if the binding changes, I think it will rather grow than
 shrink. Take the doubleclock property for example. It got here mistakenly,
 because we had a display that has this feature.

Right. That's why I would leave the pixelclock-inverted out for now, if
we're not totally sure how it's defined. Of course, it could be just me
who is not understanding the pixclk-inverted =).

 However, I'd still feel safer if the series would be split to non-DT and
 DT parts. The non-DT parts could be merged quite easily, and people
 could start using them in their kernels. This should expose
 bugs/problems related to the code.

 The DT part could be merged later, when there's confidence that the
 timings are good for all platforms.

 Or, alternatively, all the non-common bindings (de-active, pck
 invert,...) that are not used for fbdev or drm currently could be left
 out for now. But I'd stil prefer merging it in two parts.
 
 I don't say that I'm against it, but the whole reason for the series was
 getting the display timings from a DT into a graphics driver. And I think
 I remember seeing some other attempts at achieving this, but all specific
 to one special case. There is even already a mainline driver that uses an 
 older
 version of the DT bindings (vt8500-fb).

I think it'd be very useful even without DT bindings. But yes, I
understand your need for it.

You're now in v15 of the series. Are you sure v16 will be good enough to
freeze the DT bindings, if 15 previous versions weren't? =). Perhaps I'm
just overly cautious with DT

Re: [RFC v2 0/5] Common Display Framework

2012-11-23 Thread Tomi Valkeinen
Hi,

On 2012-11-22 23:45, Laurent Pinchart wrote:
 From: Laurent Pinchart laurent.pinchart+rene...@ideasonboard.com
 
 Hi everybody,
 
 Here's the second RFC of what was previously known as the Generic Panel
 Framework.

Nice work! Thanks for working on this.

I was doing some testing with the code, seeing how to use it in omapdss.
Here are some thoughts:

In your model the DSS gets the panel devices connected to it from
platform data. After the DSS and the panel drivers are loaded, DSS gets
a notification and connects DSS and the panel.

I think it's a bit limited way. First of all, it'll make the DT data a
bit more complex (although this is not a major problem). With your
model, you'll need something like:

soc-base.dtsi:

dss {
dpi0: dpi {
};
};

board.dts:

dpi0 {
panel = dpi-panel;
};

/ {
dpi-panel: dpi-panel {
...panel data...;
};
};

Second, it'll prevent hotplug, and even if real hotplug would not be
supported, it'll prevent cases where the connected panel must be found
dynamically (like reading ID from eeprom).

Third, it kinda creates a cyclical dependency: the DSS needs to know
about the panel and calls ops in the panel, and the panel calls ops in
the DSS. I'm not sure if this is an actual problem, but I usually find
it simpler if calls are done only in one direction.


What I suggest is take a simpler approach, something alike to how
regulators or gpios are used, even if slightly more complex than those:
the entity that has a video output (SoC's DSS, external chips) offers
that video output as resource. It doesn't know or care who uses it. The
user of the video output (panel, external chips) will find the video
output (to which it is connected in the HW) by some means, and will use
different operations on that output to operate the device.

This would give us something like the following DT data:

soc-base.dtsi:

dss {
dpi0: dpi {
};
};

board.dts:

/ {
dpi-panel: dpi-panel {
source = dpi0;
...panel data...;
};
};

The panel driver would do something like this in its probe:

int dpi_panel_probe()
{
    // Find the video source, increase ref
    src = get_video_source_from_of(source);

    // Reserve the video source for us. Others can still get and
    // observe it, but cannot use it as a video data source.
    // I think this should cascade upstream, so that after this call
    // each video entity from the panel to the SoC's CRTC is
    // reserved and locked for this video pipeline.
    reserve_video_source(src);

    // Set the DPI HW configuration, like DPI data lines. The
    // configuration would come from the panel's platform data.
    set_dpi_config(src, config);

    // Register this panel as a display.
    register_display(this);
}


The DSS's dpi driver would do something like:

int dss_dpi_probe()
{
    // register as a DPI video source
    register_video_source(this);
}

A DSI-2-DPI chip would do something like:

int dsi2dpi_probe()
{
    // get, reserve and config the DSI bus from the SoC
    src = get_video_source_from_of(source);
    reserve_video_source(src);
    set_dsi_config(src, config);

    // register as a DPI video source
    register_video_source(this);
}


Here we wouldn't have a display_entity like you have, but video sources
and displays. Video sources are elements in the video pipeline, and a
video source is used only by the next downstream element. The last
element in the pipeline would not be a video source, but a display,
which would be used by the upper layer.

The video source's ops would deal with things related to the video bus
in question, like configuring data lanes, sending DSI packets, etc. The
display ops would be more high-level things, like enable, update, etc.
Actually, I guess you could consider the display to represent and deal
with the whole pipeline, while the video source deals with the bus
between two display entities.
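
To make that split a bit more concrete, here's a rough sketch of what
the two sets of ops might look like. All the names below are made up
purely for illustration; they are not an existing API:

#include <linux/types.h>

struct video_source;
struct display;
struct videomode;

// Ops of a video source: bus-level operations between two neighbouring
// entities, implemented by the upstream side (the SoC's output or an
// encoder chip's output) and used only by the next downstream element.
struct video_source_ops {
    int (*set_data_lanes)(struct video_source *src, int num_lanes);
    int (*dsi_write)(struct video_source *src, int channel,
                     const u8 *data, size_t len);
    int (*enable)(struct video_source *src);
    void (*disable)(struct video_source *src);
};

// Ops of a display: high-level operations on the whole pipeline,
// implemented by the last entity (the panel) and called by the upper
// layer (the fbdev or DRM driver).
struct display_ops {
    int (*enable)(struct display *disp);
    void (*disable)(struct display *disp);
    int (*update)(struct display *disp);
    int (*get_timings)(struct display *disp, struct videomode *vm);
};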

 Tomi






Re: [RFC v2 0/5] Common Display Framework

2012-11-23 Thread Tomi Valkeinen
On 2012-11-23 21:56, Thierry Reding wrote:
 On Thu, Nov 22, 2012 at 10:45:31PM +0100, Laurent Pinchart wrote:
 [...]
 Display entities are accessed by drivers using notifiers. Any driver
 can register a display entity notifier with the CDF, which then calls
 the notifier when a matching display entity is registered. The reason
 for this asynchronous mode of operation, compared to how drivers
 acquire regulator or clock resources, is that the display entities can
 use resources provided by the display driver. For instance a panel can
 be a child of the DBI or DSI bus controlled by the display device, or
 use a clock provided by that device. We can't defer the display device
 probe until the panel is registered and also defer the panel device
 probe until the display is registered. As most display drivers need to
 handle output device hotplug (HDMI monitors for instance), handling
 other display entities through a notification system seemed to be the
 easiest solution.

 Note that this brings a different issue after registration, as display
 controller and display entity drivers would take a reference to each
 other. Those circular references would make driver unloading
 impossible. One possible solution to this problem would be to simulate
 an unplug event for the display entity, to force the display driver to
 release the display entities it uses. We would need a userspace API for
 that though. Better solutions would of course be welcome.
 
 Maybe I don't understand all of the underlying issues correctly, but a
 parent/child model would seem like a better solution to me. We discussed
 this back when designing the DT bindings for Tegra DRM and came to the
 conclusion that the output resource of the display controller (RGB,
 HDMI, DSI or TVO) was the most suitable candidate to be the parent of
 the panel or display attached to it. The reason for that decision was
 that it keeps the flow of data or addressing of nodes consistent. So the
 chain would look something like this (on Tegra):
 
   CPU
   +-host1x
     +-dc
       +-rgb
       | +-panel
       +-hdmi
         +-monitor
 
 In a natural way this makes the output resource the master of the panel
 or display. From a programming point of view this becomes quite easy to
 implement and is very similar to how other busses like I2C or SPI are
 modelled. In device tree these would be represented as subnodes, while
 with platform data some kind of lookup could be done like for regulators
 or alternatively a board setup registration mechanism like what's in
 place for I2C or SPI.

You didn't explicitly say it, but I presume you are talking about the
device model for panels, not just how to refer to the outputs.

How would you deal with, say, a DPI panel that is controlled via I2C or
SPI? Can the panel device be both a panel device, a child of the RGB
output, and an i2c device?

The model you propose is currently used in omapdss, and while it seems
simple and logical, it's not that simple with panels/chips with separate
control and data busses.

I think it makes more sense to consider the device as a child of the
control bus. So a DPI panel controlled via I2C is an I2C device, and it
just happens to use a DPI video output as a resource (like it could use
a regulator, gpio, etc).
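
As a rough sketch, such a panel driver's probe could look like the
following, reusing the made-up helper names from the probe sketches
earlier in the thread (none of these are an existing API):

int i2c_dpi_panel_probe(struct i2c_client *client)
{
    // Control bus: the panel is probed as a regular i2c device, and
    // its registers are accessed through 'client'.

    // Data bus: find and reserve the DPI output this panel is wired
    // to, much like a regulator or a gpio would be acquired.
    src = get_video_source_from_of(source);
    reserve_video_source(src);
    set_dpi_config(src, config);

    // Register this panel as a display.
    register_display(this);
}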

 Tomi





