On Thu, Oct 13, 2016 at 04:41:01PM +0100, Chris Wilson wrote:
> On Thu, Oct 13, 2016 at 05:28:13PM +0200, Daniel Vetter wrote:
> > On Thu, Oct 13, 2016 at 04:25:18PM +0100, Chris Wilson wrote:
> > > On Thu, Oct 13, 2016 at 05:10:21PM +0200, Daniel Vetter wrote:
> > > > On Wed, Oct 12, 2016 at 12:16:33PM +0100, Chris Wilson wrote:
> > > > > @@ -379,10 +389,17 @@ void i915_gem_restore_fences(struct drm_device *dev)
> > > > >  		 * Commit delayed tiling changes if we have an object still
> > > > >  		 * attached to the fence, otherwise just clear the fence.
> > > > >  		 */
> > > > > -		if (vma && !i915_gem_object_is_tiled(vma->obj))
> > > > > +		if (vma && !i915_gem_object_is_tiled(vma->obj)) {
> > > > > +			GEM_BUG_ON(!reg->dirty);
> > > > > +			GEM_BUG_ON(!list_empty(&vma->obj->userfault_link));
> > > > > +
> > > > > +			list_move(&reg->link, &dev_priv->mm.fence_list);
> > > > > +			vma->fence = NULL;
> > > > >  			vma = NULL;
> > > > > +		}
> > > > >  
> > > > > -		fence_update(reg, vma);
> > > > > +		fence_write(reg, vma);
> > > > > +		reg->vma = vma;
> > > > 
> > > > Same comments as with the userfault_list: using rpm ordering to enforce
> > > > consistency gives me mild panic attacks here ;-)
> > > > 
> > > > Is the above (delayed tiling change commit) even possible here, at least
> > > > for rpm resume? Same for system s/r (both s3 and s4), since the pagetables
> > > > won't survive anyway. Can't we simply make this an impossibility?
> > > 
> > > We also use this from reset to rewrite the old fences, and we know there
> > > we can hit the delayed fence write [4fc788f5ee3d]. It would be possible
> > > to hit it on suspend as well.
> > > 
> > > I've been thinking about whether we should be bothering to write the
> > > fence registers with the correct value or just cancel the fences. But we
> > > have to restore anything that is pinned, and we have to write something
> > > into the fences (just to be safe), and if we have to write something we
> > > may as well use the most recent information we have, as that has a good
> > > chance of being used again.
> > > 
> > > Long story short, I don't have a better idea for restoring or avoiding
> > > the restore of fences.
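[For illustration, the cancel-vs-restore trade-off described above boils down
to a per-register choice like the sketch below. This is a simplified sketch
against the structures used in the quoted patch (fence_write(), reg->vma),
not the actual driver code:]

	for (i = 0; i < dev_priv->num_fence_regs; i++) {
		struct drm_i915_fence_reg *reg = &dev_priv->fence_regs[i];

		/* Option A: cancel - clear the register and drop the
		 * tracking.  Not free: something still has to be written
		 * into the register for safety, and any pinned fence must
		 * be restored exactly anyway.
		 *
		 *	fence_write(reg, NULL); reg->vma = NULL;
		 *
		 * Option B: restore - rewrite from the last tracked vma.
		 * Same cost per register, and the old value has a good
		 * chance of being the one wanted next.
		 */
		fence_write(reg, reg->vma);
	}
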
> > 
> > What about a rpm_resume only version that just does a blind fence_write?
> > It is something, and we can update the book-keeping once we do get to one
> > of the real synchronization points again.
> > 
> > With that we can leave the versions for reset and system s/r alone ... Or
> > is there trickery even with rpm going on?
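[A minimal sketch of that suggestion, assuming a hypothetical rpm-resume-only
helper; the function name is illustrative and not existing driver API:]

	static void i915_gem_restore_fences_rpm(struct drm_i915_private *dev_priv)
	{
		int i;

		/* Blindly rewrite the hw registers from the last known
		 * tracking; the bookkeeping is brought back in sync at the
		 * next real synchronisation point.
		 */
		for (i = 0; i < dev_priv->num_fence_regs; i++) {
			struct drm_i915_fence_reg *reg = &dev_priv->fence_regs[i];

			fence_write(reg, reg->vma);
		}
	}
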
> 
> For rpm suspend, we only zap the user's mmap and not mark the fence as
> lost. I think that's the missing piece as to why this is not as simple as
> it could be for rpm-resume. On rpm-resume we only need to restore pinned
> fences, and fences should only be pinned for hw access, and so there
> should never be any if we were rpm-suspended. (Assuming that all pinned
> fences are active, which on the surface seems a reasonable assumption.)
> 
> If that holds true, we do not need this at all for runtime pm (we still
> need it across full system suspend/reset) and just need to doctor the
> existing scary i915_gem_release_all_mmaps() (aka
> i915_gem_runtime_suspend()!) to both release the mmap and throw away the
> fence tracking. At least then we only have one dragon nest.
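
[A rough sketch of that direction, assuming the userfault_list/userfault_link
tracking introduced earlier in this series; the GEM_BUG_ON condition and the
dirty/pin_count handling are assumptions about intent, not the final code:]

	void i915_gem_runtime_suspend(struct drm_i915_private *dev_priv)
	{
		struct drm_i915_gem_object *obj, *on;
		int i;

		/* Zap all live user mmaps, as the current
		 * i915_gem_release_all_mmaps() does...
		 */
		list_for_each_entry_safe(obj, on,
					 &dev_priv->mm.userfault_list,
					 userfault_link) {
			list_del_init(&obj->userfault_link);
			drm_vma_node_unmap(&obj->base.vma_node,
					   obj->base.dev->anon_inode->i_mapping);
		}

		/* ...and also mark the fences as lost, so that rpm-resume
		 * has nothing left to restore: no fence should be pinned
		 * (i.e. held for hw access) while we are rpm-suspended.
		 */
		for (i = 0; i < dev_priv->num_fence_regs; i++) {
			struct drm_i915_fence_reg *reg = &dev_priv->fence_regs[i];

			GEM_BUG_ON(reg->pin_count);
			reg->dirty = true;
		}
	}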

Yeah, would be great to unify this stuff a bit ... Making fence semantics
the same for rpm suspend as for normal suspend would definitely be nice,
and the hw will forget about the registers anyway.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch