On Thu, Jul 11, 2013 at 02:20:50PM +0300, Imre Deak wrote:
> On Mon, 2013-07-08 at 23:08 -0700, Ben Widawsky wrote:
> > Formerly: "drm/i915: Create VMAs (part 1)"
> > 
> > In a previous patch, the notion of a VM was introduced. A VMA describes
> > an area of part of the VM address space. A VMA is similar to the concept
> > in the linux mm. However, instead of representing regular memory, a VMA
> > is backed by a GEM BO. There may be many VMAs for a given object, one
> > for each VM the object is to be used in. This may occur through flink,
> > dma-buf, or a number of other transient states.
> > 
> > Currently the code depends on only 1 VMA per object, for the global GTT
> > (and aliasing PPGTT). The following patches will address this and make
> > the rest of the infrastructure more suited
> > 
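(Since the i915_drv.h hunk is snipped below, here is a rough sketch of the
bookkeeping the commit message describes. The field names follow the series,
but treat this as an illustration of the idea rather than the actual hunk.)

/* One mapping of a GEM BO into one address space (VM). An object may
 * have several of these, linked on obj->vma_list. */
struct i915_vma {
        struct drm_mm_node node;           /* range allocated inside the VM */
        struct drm_i915_gem_object *obj;   /* the GEM BO backing the mapping */
        struct i915_address_space *vm;     /* the VM (global GTT or a PPGTT) */
        struct list_head vma_link;         /* entry in obj->vma_list */
};
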
> > v2: s/i915_obj/i915_gem_obj (Chris)
> > 
> > v3: Only move an object to the now global unbound list if there are no
> > more VMAs for the object which are bound into a VM (ie. the list is
> > empty).
> > 
> > v4: killed obj->gtt_space
> > some reworks due to rebase
> > 
> > Signed-off-by: Ben Widawsky <b...@bwidawsk.net>
> > ---
> >  drivers/gpu/drm/i915/i915_drv.h        | 48 ++++++++++++++++++++++------
> >  drivers/gpu/drm/i915/i915_gem.c        | 57 +++++++++++++++++++++++++++++-----
> >  drivers/gpu/drm/i915/i915_gem_evict.c  | 12 ++++---
> >  drivers/gpu/drm/i915/i915_gem_gtt.c    |  5 +--
> >  drivers/gpu/drm/i915/i915_gem_stolen.c | 14 ++++++---
> >  5 files changed, 110 insertions(+), 26 deletions(-)
> > 
> > [...]
> > diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> > index 525aa8f..058ad44 100644
> > --- a/drivers/gpu/drm/i915/i915_gem.c
> > +++ b/drivers/gpu/drm/i915/i915_gem.c
> > @@ -2578,6 +2578,7 @@ int
> >  i915_gem_object_unbind(struct drm_i915_gem_object *obj)
> >  {
> >     drm_i915_private_t *dev_priv = obj->base.dev->dev_private;
> > +   struct i915_vma *vma;
> >     int ret;
> >  
> >     if (!i915_gem_obj_ggtt_bound(obj))
> > @@ -2615,11 +2616,20 @@ i915_gem_object_unbind(struct drm_i915_gem_object *obj)
> >     i915_gem_object_unpin_pages(obj);
> >  
> >     list_del(&obj->mm_list);
> > -   list_move_tail(&obj->global_list, &dev_priv->mm.unbound_list);
> >     /* Avoid an unnecessary call to unbind on rebind. */
> >     obj->map_and_fenceable = true;
> >  
> > -   drm_mm_remove_node(&obj->gtt_space);
> > +   vma = __i915_gem_obj_to_vma(obj);
> > +   list_del(&vma->vma_link);
> > +   drm_mm_remove_node(&vma->node);
> > +   i915_gem_vma_destroy(vma);
> > +
> > +   /* Since the unbound list is global, only move to that list if
> > +    * no more VMAs exist.
> > +    * NB: Until we have real VMAs there will only ever be one */
> > +   WARN_ON(!list_empty(&obj->vma_list));
> > +   if (list_empty(&obj->vma_list))
> > +           list_move_tail(&obj->global_list, &dev_priv->mm.unbound_list);
> >  
> >     return 0;
> >  }
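(The i915_gem_vma_create()/i915_gem_vma_destroy() helpers used above live in
the snipped part of the diff; roughly, and only as an assumed sketch of the
intent rather than the real bodies, they amount to:)

struct i915_vma *i915_gem_vma_create(struct drm_i915_gem_object *obj,
                                     struct i915_address_space *vm)
{
        struct i915_vma *vma = kzalloc(sizeof(*vma), GFP_KERNEL);

        if (vma == NULL)
                return NULL;    /* callers bail out with -ENOMEM */

        INIT_LIST_HEAD(&vma->vma_link);
        vma->vm = vm;
        vma->obj = obj;
        /* vma_link gets added to obj->vma_list once the binding succeeds;
         * the exact spot in the real patch may differ. */

        return vma;
}

void i915_gem_vma_destroy(struct i915_vma *vma)
{
        WARN_ON(vma->node.allocated);   /* must already be off the drm_mm */
        kfree(vma);
}
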
> > @@ -3070,8 +3080,12 @@ i915_gem_object_bind_to_gtt(struct drm_i915_gem_object *obj,
> >     bool mappable, fenceable;
> >     size_t gtt_max = map_and_fenceable ?
> >             dev_priv->gtt.mappable_end : dev_priv->gtt.base.total;
> > +   struct i915_vma *vma;
> >     int ret;
> >  
> > +   if (WARN_ON(!list_empty(&obj->vma_list)))
> > +           return -EBUSY;
> > +
> >     fence_size = i915_gem_get_gtt_size(dev,
> >                                        obj->base.size,
> >                                        obj->tiling_mode);
> > @@ -3110,9 +3124,15 @@ i915_gem_object_bind_to_gtt(struct drm_i915_gem_object *obj,
> >  
> >     i915_gem_object_pin_pages(obj);
> >  
> > +   vma = i915_gem_vma_create(obj, &dev_priv->gtt.base);
> > +   if (vma == NULL) {
> > +           i915_gem_object_unpin_pages(obj);
> > +           return -ENOMEM;
> > +   }
> > +
> >  search_free:
> >     ret = drm_mm_insert_node_in_range_generic(&dev_priv->gtt.base.mm,
> > -                                             &obj->gtt_space,
> > +                                             &vma->node,
> >                                               size, alignment,
> >                                               obj->cache_level, 0, gtt_max);
> >     if (ret) {
> > @@ -3126,22 +3146,23 @@ search_free:
> >             i915_gem_object_unpin_pages(obj);
> >             return ret;
> >     }
> > -   if (WARN_ON(!i915_gem_valid_gtt_space(dev, &obj->gtt_space,
> > +   if (WARN_ON(!i915_gem_valid_gtt_space(dev, &vma->node,
> >                                           obj->cache_level))) {
> >             i915_gem_object_unpin_pages(obj);
> > -           drm_mm_remove_node(&obj->gtt_space);
> > +           drm_mm_remove_node(&vma->node);
> >             return -EINVAL;
> >     }
> >  
> >     ret = i915_gem_gtt_prepare_object(obj);
> >     if (ret) {
> >             i915_gem_object_unpin_pages(obj);
> > -           drm_mm_remove_node(&obj->gtt_space);
> > +           drm_mm_remove_node(&vma->node);
> >             return ret;
> >     }
> 
> Freeing vma on the error path is missing.
> 
> With this and the issue in 1/5 addressed things look good to me, so on
> 1-5:
> 
> Reviewed-by: Imre Deak <imre.d...@intel.com>
> 
> --Imre

Nice catch; that was a rebase fail. I feel no shame in offering the excuse
that it was correct in the original series.
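
Something along these lines for the respin, i.e. tear the vma down on the
failure paths as well (untested sketch on top of the hunk above):

        if (WARN_ON(!i915_gem_valid_gtt_space(dev, &vma->node,
                                              obj->cache_level))) {
                i915_gem_object_unpin_pages(obj);
                drm_mm_remove_node(&vma->node);
                i915_gem_vma_destroy(vma);      /* was leaked before */
                return -EINVAL;
        }

        ret = i915_gem_gtt_prepare_object(obj);
        if (ret) {
                i915_gem_object_unpin_pages(obj);
                drm_mm_remove_node(&vma->node);
                i915_gem_vma_destroy(vma);      /* was leaked before */
                return ret;
        }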



-- 
Ben Widawsky, Intel Open Source Technology Center