This is a note to let you know that I've just added the patch titled

    drm/i915: Only run idle processing from i915_gem_retire_requests_worker

to the 3.8-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     drm-i915-only-run-idle-processing-from-i915_gem_retire_requests_worker.patch
and it can be found in the queue-3.8 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


>From 725a5b54028916cd2511a251c5b5b13d1715addc Mon Sep 17 00:00:00 2001
From: Chris Wilson <ch...@chris-wilson.co.uk>
Date: Tue, 8 Jan 2013 11:02:57 +0000
Subject: drm/i915: Only run idle processing from i915_gem_retire_requests_worker

From: Chris Wilson <ch...@chris-wilson.co.uk>

commit 725a5b54028916cd2511a251c5b5b13d1715addc upstream.

When adding the fb idle detection to mark-inactive, it was forgotten
that userspace can drive the processing of retire-requests. We assumed
that it would be principally driven by the retire requests worker,
running once every second whilst active and so we would get the
deferred timer for free. Instead we spend too many CPU cycles
reclocking the LVDS preventing real work from being done.

Signed-off-by: Chris Wilson <ch...@chris-wilson.co.uk>
Reported-and-tested-by: Alexander Lam <lambchop...@gmail.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=58843
Reviewed-by: Rodrigo Vivi <rodrigo.v...@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vet...@ffwll.ch>
Signed-off-by: Greg Kroah-Hartman <gre...@linuxfoundation.org>

---
 drivers/gpu/drm/i915/i915_gem.c      |    3 ---
 drivers/gpu/drm/i915/intel_display.c |   12 +++---------
 drivers/gpu/drm/i915/intel_drv.h     |    3 +--
 3 files changed, 4 insertions(+), 14 deletions(-)

--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1918,9 +1918,6 @@ i915_gem_object_move_to_inactive(struct
 	BUG_ON(obj->base.write_domain & ~I915_GEM_GPU_DOMAINS);
 	BUG_ON(!obj->active);
 
-	if (obj->pin_count) /* are we a framebuffer? */
-		intel_mark_fb_idle(obj);
-
 	list_move_tail(&obj->mm_list, &dev_priv->mm.inactive_list);
 
 	list_del_init(&obj->ring_list);
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -6993,11 +6993,6 @@ void intel_mark_busy(struct drm_device *
 
 void intel_mark_idle(struct drm_device *dev)
 {
-}
-
-void intel_mark_fb_busy(struct drm_i915_gem_object *obj)
-{
-	struct drm_device *dev = obj->base.dev;
 	struct drm_crtc *crtc;
 
 	if (!i915_powersave)
@@ -7007,12 +7002,11 @@ void intel_mark_fb_busy(struct drm_i915_
 		if (!crtc->fb)
 			continue;
 
-		if (to_intel_framebuffer(crtc->fb)->obj == obj)
-			intel_increase_pllclock(crtc);
+		intel_decrease_pllclock(crtc);
 	}
 }
 
-void intel_mark_fb_idle(struct drm_i915_gem_object *obj)
+void intel_mark_fb_busy(struct drm_i915_gem_object *obj)
 {
 	struct drm_device *dev = obj->base.dev;
 	struct drm_crtc *crtc;
@@ -7025,7 +7019,7 @@ void intel_mark_fb_idle(struct drm_i915_
 			continue;
 
 		if (to_intel_framebuffer(crtc->fb)->obj == obj)
-			intel_decrease_pllclock(crtc);
+			intel_increase_pllclock(crtc);
 	}
 }
 
--- a/drivers/gpu/drm/i915/intel_drv.h
+++ b/drivers/gpu/drm/i915/intel_drv.h
@@ -440,9 +440,8 @@ extern bool intel_sdvo_init(struct drm_d
 extern void intel_dvo_init(struct drm_device *dev);
 extern void intel_tv_init(struct drm_device *dev);
 extern void intel_mark_busy(struct drm_device *dev);
-extern void intel_mark_idle(struct drm_device *dev);
 extern void intel_mark_fb_busy(struct drm_i915_gem_object *obj);
-extern void intel_mark_fb_idle(struct drm_i915_gem_object *obj);
+extern void intel_mark_idle(struct drm_device *dev);
 extern bool intel_lvds_init(struct drm_device *dev);
 extern void intel_dp_init(struct drm_device *dev, int output_reg,
 			  enum port port);


Patches currently in stable-queue which might be from ch...@chris-wilson.co.uk are

queue-3.8/drm-i915-only-run-idle-processing-from-i915_gem_retire_requests_worker.patch

--
To unsubscribe from this list: send the line "unsubscribe stable" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html