On 2018-06-27 11:39 AM, Emil Velikov wrote:
> On 27 June 2018 at 09:40, Michel Dänzer <mic...@daenzer.net> wrote:
>> On 2018-06-26 07:11 PM, Emil Velikov wrote:
>>> On 26 June 2018 at 17:23, Michel Dänzer <mic...@daenzer.net> wrote:
>>>> On 2018-06-26 05:43 PM, Emil Velikov wrote:
>>>>> On 25 June 2018 at 22:45, Zuo, Jerry <jerry....@amd.com> wrote:
>>>>>>
>>>>>>
>>>>>>
>>>>>> We are working on an issue where a 4K@60 HDMI display does not light
>>>>>> up, and only 4K@30 is shown, from:
>>>>>> https://bugs.freedesktop.org/show_bug.cgi?id=106959 and others.
>>>>>>
>>>>>>
>>>>>>
>>>>>> Some displays' (e.g., ASUS PA328) HDMI ports expose a YCbCr420 CEA
>>>>>> extension block with 4K@60 supported. Such HDMI 4K@60 is not real
>>>>>> HDMI 2.0, but still follows the HDMI 1.4 spec, with a maximum TMDS
>>>>>> clock of 300MHz instead of 600MHz.
>>>>>>
>>>>>> To get such 4K@60 modes supported, the bandwidth needs to be limited
>>>>>> by reducing the color space to YCbCr420 only. We've already raised the
>>>>>> YCbCr420-only flag (attached patch) on the kernel side to pass mode
>>>>>> validation, and expose it to user space.
>>>>>>
>>>>>>
>>>>>>
>>>>>> We think that one of the issues causing this problem is that usermode
>>>>>> prunes the 4K@60 mode from the modelist (attached Xorg.0.log). It seems
>>>>>> like when usermode receives all the modes, it doesn't take into account
>>>>>> that the 4K@60 mode is YCbCr4:2:0-specific. In order to pass validation
>>>>>> for being added to the usermode modelist, its pixel clock needs to be
>>>>>> divided by 2 so that it won't exceed the maximum physical TMDS clock
>>>>>> (300MHz). That might explain the difference in modes between our
>>>>>> usermode and modeset.
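
For reference, the arithmetic behind the halving (using the standard
CEA-861 timing for 3840x2160@60; the 300MHz limit is the HDMI 1.4 figure
quoted above):

  4K@60 nominal pixel clock:             594 MHz  (> 300 MHz, so pruned)
  same mode as YCbCr 4:2:0:  594 / 2 =   297 MHz  (<= 300 MHz, so it fits)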
>>>>>>
>>>>>>
>>>>>>
>>>>>> Such a YCbCr4:2:0 4K@60 special mode is marked in DRM by raising a
>>>>>> flag (y420_vdb_modes) inside the connector's display_info, which can
>>>>>> be seen in do_y420vdb_modes(). Usermode could rely on that flag to
>>>>>> pick up such a mode and halve the required pixel clock to prevent it
>>>>>> from being pruned out.
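
As a rough illustration of that suggestion (this is not existing X server
code; how the 4:2:0-only information would reach usermode, and the helper
below, are assumptions):

#include <stdbool.h>
#include <xf86drmMode.h>

/* Hypothetical sketch: if usermode somehow knows a mode is only achievable
 * as YCbCr 4:2:0 (e.g. by parsing the EDID Y420VDB itself, or via a future
 * kernel property), it could validate it against the sink's TMDS limit
 * using half the nominal pixel clock, since 4:2:0 carries two pixels per
 * TMDS character. */
static bool mode_clock_ok(const drmModeModeInfo *mode, bool ycbcr420_only,
                          unsigned int max_tmds_khz)
{
    unsigned int effective_khz = ycbcr420_only ? mode->clock / 2
                                               : mode->clock;

    return effective_khz <= max_tmds_khz;
}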
>>>>>>
>>>>>>
>>>>>>
>>>>>> We were hoping someone could help look at it from the usermode
>>>>>> perspective. Thanks a lot.
>>>>>>
>>>>> Just some observations, while going through some coffee. Take them
>>>>> with a pinch of salt.
>>>>>
>>>>> Currently the kernel edid parser (in drm core) handles the
>>>>> EXT_VIDEO_DATA_BLOCK_420 extended block.
>>>>> Additionally, the kernel allows such modes only if the (per connector)
>>>>> ycbcr_420_allowed bool is set by the driver.
>>>>>
>>>>> Quick look shows that it's only enabled by i915 on gen10 && geminilake 
>>>>> hardware.
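
For context, that opt-in is a single flag on the connector. A minimal
sketch of how a KMS driver would set it at connector init time (the
function around it is made up; only drm_connector_init() and the
ycbcr_420_allowed field come from the DRM core):

#include <drm/drm_connector.h>

/* Sketch only: my_connector_init() and the funcs pointer are placeholders. */
static int my_connector_init(struct drm_device *dev,
                             struct drm_connector *connector,
                             const struct drm_connector_funcs *funcs)
{
    int ret = drm_connector_init(dev, connector, funcs,
                                 DRM_MODE_CONNECTOR_HDMIA);
    if (ret)
        return ret;

    /* Without this, the EDID parser drops modes that are only listed in
     * the YCbCr 4:2:0 video data block. */
    connector->ycbcr_420_allowed = true;

    return 0;
}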
>>>>>
>>>>> At the same time, X does its own fairly partial edid parsing and
>>>>> doesn't handle any(?) extended blocks.
>>>>>
>>>>> One solution is to update the X parser, although that seems like an
>>>>> endless game of cat and mouse.
>>>>> IMHO a much better approach is to not use the edid codepaths for KMS
>>>>> drivers (of which AMDGPU is one).
>>>>> On those, the supported modes are advertised by the kernel module via
>>>>> drmModeGetConnector.
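
As a minimal illustration of what relying on the kernel's list looks like
(libdrm; error handling trimmed, the connector id assumed to be known):

#include <stdio.h>
#include <xf86drmMode.h>

/* List the modes the kernel exposes for one connector; these have already
 * passed the DRM core / driver validation. */
static void list_kernel_modes(int drm_fd, uint32_t connector_id)
{
    drmModeConnectorPtr conn = drmModeGetConnector(drm_fd, connector_id);
    if (!conn)
        return;

    for (int i = 0; i < conn->count_modes; i++) {
        const drmModeModeInfo *m = &conn->modes[i];
        printf("%s %ux%u@%u %u kHz\n",
               m->name, m->hdisplay, m->vdisplay, m->vrefresh, m->clock);
    }

    drmModeFreeConnector(conn);
}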
>>>>
>>>> We are getting the modes from the kernel; the issue is they are then
>>>> pruned (presumably by xf86ProbeOutputModes => xf86ValidateModesClocks)
>>>> due to violating the clock limits, as described by Jerry above.
>>>>
>>> I might have been too brief there. Here is a more elaborate
>>> suggestion; please point out any misunderstandings.
>>>
>>> If we look into the drivers we'll see a call to xf86InterpretEDID(),
>>> followed by xf86OutputSetEDID().
>>> The former does a partial parse of the edid, creating an xf86MonPtr
>>> (timing information et al.), while the latter attaches it to the
>>> output.
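
Roughly, the pattern looks like this in a KMS DDX (simplified sketch; the
EDID blob fetch and error handling are omitted):

#include <xf86Crtc.h>

/* Simplified sketch of the flow described above: the DDX hands the raw
 * EDID to the server, which does its own partial parse and later uses the
 * result for mode validation. */
static void set_output_edid(xf86OutputPtr output, unsigned char *edid_blob)
{
    xf86MonPtr mon = NULL;

    if (edid_blob)
        mon = xf86InterpretEDID(output->scrn->scrnIndex, edid_blob);

    /* Attaches the parsed EDID to the output; xf86ProbeOutputModes() will
     * later validate modes against the timing limits found in it. */
    xf86OutputSetEDID(output, mon);
}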
>>>
>>> Thus, as we get into xf86ProbeOutputModes/xf86ValidateModesClocks, the
>>> Xserver checks each mode against the given timing/bandwidth constraints,
>>> discarding it where applicable.
>>>
>>> Considering that the DRM driver already does similar checks, X could
>>> side-step the parsing and filtering/validation altogether.
>>> Trusting the kernel should be reasonable, considering weston (and I
>>> would imagine other wayland compositors) already does so.
>>
>> It's still not clear to me what exactly you're proposing. Maybe you can
>> whip up at least a mock-up patch?
>>
>>
> I don't have much time to tinker with it; hopefully the following
> proposal will be clear enough. If not, perhaps I'll get to it at some
> point.

I'm afraid it's still rather vague, to me at least. Anyway, since this is
a (at least behavioural) regression in the 4.18 kernel cycle, it cannot
be solved with userspace changes, especially not any involving the
xserver. A solution which works with current userspace is needed for the
final 4.18 release.


Taking a step back, it's not clear to me why the kernel change bisected
in the bug report makes a difference as to whether Xorg prunes the 60 Hz
4K modes or not. The EDID is identical in both cases, right? So does the
kernel report a different clock for the modes now, or what exactly has
changed that is visible to userspace?

If it's the reported pixel clock, maybe the kernel could continue
reporting the pixel clock based on the minimum supported colour depth,
and later choose the colour depth according to the available bandwidth /
validate the actual clock based on the colour depth requested by userspace?
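
To make the idea concrete, a rough sketch of what that could mean
(entirely hypothetical helper names, not actual amdgpu/DC code):

#include <stdbool.h>  /* in the kernel this would come from linux/types.h */

/* Mode reported to userspace: keep the clock based on the least demanding
 * configuration the sink accepts (e.g. YCbCr 4:2:0, minimum colour depth),
 * so clock-based pruning in userspace doesn't drop the mode. */
static int reported_clock_khz(int pixel_clock_khz, bool ycbcr420_only)
{
    return ycbcr420_only ? pixel_clock_khz / 2 : pixel_clock_khz;
}

/* At atomic_check time: recompute the real TMDS clock from the colour
 * format actually chosen for this commit and validate it against the
 * sink's limit. */
static bool tmds_clock_fits(int pixel_clock_khz, bool use_ycbcr420,
                            int max_tmds_khz)
{
    int tmds_khz = use_ycbcr420 ? pixel_clock_khz / 2 : pixel_clock_khz;

    return tmds_khz <= max_tmds_khz;
}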


-- 
Earthling Michel Dänzer               |               http://www.amd.com
Libre software enthusiast             |             Mesa and X developer