On Mon, Jun 16, 2014 at 02:41:15AM -0600, Steven Wilson wrote:
> >Synopsis:    inteldrm doesn't allow 5.7 Gbps modes on DisplayPort
> >Category:    kernel
> >Environment:
>       System      : OpenBSD 5.5
>       Details     : OpenBSD 5.5-current (GENERIC) #1: Mon Jun 16 07:29:22 MDT 2014
>                        r...@steven-laptop.home.lan:/usr/src/sys/arch/amd64/compile/GENERIC
> 
>       Architecture: OpenBSD.amd64
>       Machine     : amd64
> >Description:
>       On my Lenovo IdeaPad Yoga 2 Pro laptop, inteldrm turns off the
>       panel when it loads. This appears to be caused by the mode
>       filtering code in the driver rejecting the modes and then
>       continuing as though no display is attached to the internal
>       eDP port. This in turn appears to be caused by the mode
>       filtering code treating 5.4 Gbps DP links as though they were
>       2.7 Gbps links, and thus too slow to support the panel's modes.
> >How-To-Repeat:
>       Install OpenBSD 5.5 or a recent snapshot on a Lenovo IdeaPad
>       Yoga 2 Pro and boot it. Wait until the inteldrm driver kicks
>       in and the panel turns off. Reboot and disable inteldrm*
>       in UKC. Observe that the panel stays active with inteldrm
>       disabled.
> >Fix:
>       The most basic workaround is to disable inteldrm* in UKC, but
>       this makes X pretty unusable. An external display should work,
>       but I don't have the right cable in the right place to test
>       that just now.
> 
>       The following patch makes the mode filtering code take 5.4 Gbps
>       links into account. With it applied, the panel is usable. I am
>       a little concerned about the error messages that appear (see
>       the second dmesg), but I don't see any obvious malfunction
>       associated with them.

Thanks for the report and the analysis.  I'd prefer to use the
Linux commits around this if at all possible.

Can you try the patch below, which is a combination of two commits:

commit d4eead50eb206b875f54f66cc0f6ec7d54122c28
Author: Imre Deak <imre.d...@intel.com>
Date:   Tue Jul 9 17:05:26 2013 +0300

    drm/i915: fix lane bandwidth capping for DP 1.2 sinks

    DP 1.2 compatible displays may report a 5.4Gbps maximum bandwidth which
    the driver will treat as an invalid value and use 1.62Gbps instead. Fix
    this by capping to 2.7Gbps for sinks reporting a 5.4Gbps max bw.

    Also add a warning for reserved values.

    v2:
    - allow only bw values explicitly listed in the DP standard (Daniel,
      Chris)

    Signed-off-by: Imre Deak <imre.d...@intel.com>
    Signed-off-by: Daniel Vetter <daniel.vet...@ffwll.ch>

commit 9fa5f6522e6eecb5ab20192a264a29ba4f2f4e85
Author: Paulo Zanoni <paulo.r.zan...@intel.com>
Date:   Thu Nov 29 11:31:29 2012 -0200

    drm/i915: kill intel_dp_link_clock()
   
    Use drm_dp_bw_code_to_link_rate instead. It's the same thing, but
    supports DP_LINK_BW_5_4 and is also used by the other drivers.

    Signed-off-by: Paulo Zanoni <paulo.r.zan...@intel.com>
    Signed-off-by: Daniel Vetter <daniel.vet...@ffwll.ch>


If that doesn't work we could then look into:

commit 9bbfd20abe5025adbb0ac75160bd2e41158a9e83
Author: Paulo Zanoni <paulo.r.zan...@intel.com>
Date:   Tue Apr 29 11:00:22 2014 -0300

    drm/i915: don't try DP_LINK_BW_5_4 on HSW ULX

    Because the docs say ULX doesn't support it on HSW.

    Reviewed-by: Dave Airlie <airl...@redhat.com>
    Signed-off-by: Paulo Zanoni <paulo.r.zan...@intel.com>
    Signed-off-by: Jani Nikula <jani.nik...@intel.com>


commit 06ea66b6bb445043dc25a9626254d5c130093199
Author: Todd Previte <tprev...@gmail.com>
Date:   Mon Jan 20 10:19:39 2014 -0700

    drm/i915: Enable 5.4Ghz (HBR2) link rate for Displayport 1.2-capable devices

    For HSW+ platforms, enable the 5.4Ghz (HBR2) link rate for devices that
    support it. The sink device must report that it supports Displayport 1.2
    and the HBR2 bit rate in the DPCD in order to use HBR2.

    Signed-off-by: Todd Previte <tprev...@gmail.com>
    Signed-off-by: Daniel Vetter <daniel.vet...@ffwll.ch>

The error messages you are seeing currently appear on all Haswell machines.

Index: intel_dp.c
===================================================================
RCS file: /cvs/src/sys/dev/pci/drm/i915/intel_dp.c,v
retrieving revision 1.18
diff -u -p -r1.18 intel_dp.c
--- intel_dp.c  30 Mar 2014 01:10:36 -0000      1.18
+++ intel_dp.c  17 Jun 2014 04:49:17 -0000
@@ -139,22 +139,18 @@ intel_dp_max_link_bw(struct intel_dp *in
        case DP_LINK_BW_1_62:
        case DP_LINK_BW_2_7:
                break;
+       case DP_LINK_BW_5_4: /* 1.2 capable displays may advertise higher bw */
+               max_link_bw = DP_LINK_BW_2_7;
+               break;
        default:
+               WARN(1, "invalid max DP link bw val %x, using 1.62Gbps\n",
+                    max_link_bw);
                max_link_bw = DP_LINK_BW_1_62;
                break;
        }
        return max_link_bw;
 }
 
-static int
-intel_dp_link_clock(uint8_t link_bw)
-{
-       if (link_bw == DP_LINK_BW_2_7)
-               return 270000;
-       else
-               return 162000;
-}
-
 /*
  * The units on the numbers in the next two are... bizarre.  Examples will
  * make it clearer; this one parallels an example in the eDP spec.
@@ -189,7 +185,8 @@ intel_dp_adjust_dithering(struct intel_d
                          struct drm_display_mode *mode,
                          bool adjust_mode)
 {
-       int max_link_clock = intel_dp_link_clock(intel_dp_max_link_bw(intel_dp));
+       int max_link_clock =
+               drm_dp_bw_code_to_link_rate(intel_dp_max_link_bw(intel_dp));
        int max_lanes = drm_dp_max_lane_count(intel_dp->dpcd);
        int max_rate, mode_rate;
 
@@ -745,12 +742,15 @@ intel_dp_mode_fixup(struct drm_encoder *
 
        for (clock = 0; clock <= max_clock; clock++) {
                for (lane_count = 1; lane_count <= max_lane_count; lane_count <<= 1) {
-                       int link_avail = intel_dp_max_data_rate(intel_dp_link_clock(bws[clock]), lane_count);
+                       int link_bw_clock =
+                               drm_dp_bw_code_to_link_rate(bws[clock]);
+                       int link_avail = intel_dp_max_data_rate(link_bw_clock,
+                                                               lane_count);
 
                        if (mode_rate <= link_avail) {
                                intel_dp->link_bw = bws[clock];
                                intel_dp->lane_count = lane_count;
-                               adjusted_mode->clock = intel_dp_link_clock(intel_dp->link_bw);
+                               adjusted_mode->clock = link_bw_clock;
                                DRM_DEBUG_KMS("DP link bw %02x lane "
                                                "count %d clock %d bpp %d\n",
                                       intel_dp->link_bw, intel_dp->lane_count,
