On Fri, 30 May 2014 19:56:22 +0100
Chris Wilson <ch...@chris-wilson.co.uk> wrote:

> On Fri, May 30, 2014 at 11:05:21AM -0700, Jesse Barnes wrote:
> > +static void intel_queue_crtc_enable(struct drm_crtc *crtc)
> > +{
> > +   struct drm_device *dev = crtc->dev;
> > +   struct drm_i915_private *dev_priv = dev->dev_private;
> > +   struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
> > +   struct intel_crtc_work *work;
> > +
> > +   WARN(!mutex_is_locked(&dev->mode_config.mutex),
> > +        "need mode_config mutex\n");
> > +
> > +   work = kmalloc(sizeof(*work), GFP_KERNEL);
> > +   if (!work) {
> > +           dev_priv->display._crtc_enable(&intel_crtc->base);
> > +           return;
> > +   }
> > +
> > +   work->enable = true;
> > +   work->intel_crtc = intel_crtc;
> > +   INIT_LIST_HEAD(&work->head);
> (redundant, list_add doesn't care)

Will fix.

> > +
> > +   list_add_tail(&work->head, &dev_priv->crtc_work_queue);
> > +   schedule_work(&dev_priv->crtc_work);
> > +}
> 
> If we tracked one queued item per crtc, we could avoid the allocation
> and allow for elision of pending operations.

Yeah, I thought about that too; it might make for a good optimization,
but I figured this was the simplest way to start.
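
For reference, here's an untested sketch of roughly what that could
look like (the pending_* fields, intel_queue_crtc_op() and
intel_crtc_work_fn() below are invented for illustration, not part of
this patch):

struct intel_crtc {
	struct drm_crtc base;
	/* ... existing fields ... */

	/* hypothetical: one embedded pending op, so no allocation */
	struct work_struct pending_work;
	spinlock_t pending_lock;	/* protects the two fields below */
	bool pending_valid;		/* an op is queued but not yet run */
	bool pending_enable;		/* enable vs. disable */
};

static void intel_queue_crtc_op(struct intel_crtc *intel_crtc, bool enable)
{
	spin_lock(&intel_crtc->pending_lock);
	/*
	 * Elision: overwriting a queued-but-unrun op means an enable
	 * quickly followed by a disable only runs the disable.
	 */
	intel_crtc->pending_valid = true;
	intel_crtc->pending_enable = enable;
	spin_unlock(&intel_crtc->pending_lock);

	schedule_work(&intel_crtc->pending_work);
}

static void intel_crtc_work_fn(struct work_struct *work)
{
	struct intel_crtc *intel_crtc =
		container_of(work, struct intel_crtc, pending_work);
	struct drm_device *dev = intel_crtc->base.dev;
	struct drm_i915_private *dev_priv = dev->dev_private;
	bool valid, enable;

	/* snapshot and clear the pending op under the lock */
	spin_lock(&intel_crtc->pending_lock);
	valid = intel_crtc->pending_valid;
	enable = intel_crtc->pending_enable;
	intel_crtc->pending_valid = false;
	spin_unlock(&intel_crtc->pending_lock);

	if (!valid)
		return;

	mutex_lock(&dev->mode_config.mutex);
	if (enable)
		dev_priv->display._crtc_enable(&intel_crtc->base);
	else
		dev_priv->display._crtc_disable(&intel_crtc->base);
	mutex_unlock(&dev->mode_config.mutex);
}

INIT_WORK() and spin_lock_init() for the new fields would go in CRTC
init. The nice property is that queueing can never fail, so the
synchronous fallback in the posted version goes away entirely.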

-- 
Jesse Barnes, Intel Open Source Technology Center