On Wed, Sep 23, 2015 at 04:21:22PM +0530, [email protected] wrote:
> From: Ankitprasad Sharma <[email protected]>
> 
> This patch adds support for extending the pread/pwrite functionality
> to objects not backed by shmem. The access is made through the GTT
> interface. This covers prime objects as well as stolen-memory-backed
> objects; for userptr objects it remains forbidden.
> 
> v2: Drop locks around slow_user_access, prefault the pages before
> access (Chris)
> 
> v3: Rebased to the latest drm-intel-nightly (Ankit)
> 
> v4: Moved page base & offset calculations outside the copy loop,
> corrected data types for size and offset variables, corrected if-else
> braces format (Tvrtko/kerneldocs)
> 
> v5: Enabled pread/pwrite for all non-shmem backed objects, with no
> tiling restrictions (Ankit)
> 
> v6: Using pwrite_fast for non-shmem backed objects as well (Chris)
> 
> Testcase: igt/gem_stolen
> 
> Signed-off-by: Ankitprasad Sharma <[email protected]>
> ---
>  drivers/gpu/drm/i915/i915_gem.c | 131 +++++++++++++++++++++++++++++++++-------
>  1 file changed, 110 insertions(+), 21 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index 85025b1..b4c64d6 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -614,6 +614,99 @@ shmem_pread_slow(struct page *page, int shmem_page_offset, int page_length,
>       return ret ? - EFAULT : 0;
>  }
>  
> +static inline uint64_t
> +slow_user_access(struct io_mapping *mapping,
> +              uint64_t page_base, int page_offset,
> +              char __user *user_data,
> +              int length, bool pwrite)
> +{
> +     void __iomem *vaddr_inatomic;
> +     void *vaddr;
> +     uint64_t unwritten;
> +
> +     vaddr_inatomic = io_mapping_map_wc(mapping, page_base);
> +     /* We can use the cpu mem copy function because this is X86. */
> +     vaddr = (void __force *)vaddr_inatomic + page_offset;
> +     if (pwrite)
> +             unwritten = __copy_from_user(vaddr, user_data, length);
> +     else
> +             unwritten = __copy_to_user(user_data, vaddr, length);
> +
> +     io_mapping_unmap(vaddr_inatomic);
> +     return unwritten;
> +}
> +

> @@ -831,10 +921,16 @@ i915_gem_gtt_pwrite_fast(struct drm_device *dev,
>                * source page isn't available.  Return the error and we'll
>                * retry in the slow path.
>                */
> -             if (fast_user_write(dev_priv->gtt.mappable, page_base,
> +             if (obj->base.filp &&
> +                 fast_user_write(dev_priv->gtt.mappable, page_base,
>                                   page_offset, user_data, page_length)) {
>                       ret = -EFAULT;
>                       goto out_flush;
> +             } else if (slow_user_access(dev_priv->gtt.mappable,
> +                                         page_base, page_offset,
> +                                         user_data, page_length, true)) {
> +                     ret = -EFAULT;
> +                     goto out_flush;
>               }

You cannot use slow_user_access() here: the user copy may take a page
fault while we are holding struct_mutex, and if the faulting address is
itself a GTT mmap the fault handler (i915_gem_fault()) will try to
acquire struct_mutex again. That is the lock inversion.
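
The only way to tolerate the fault would be to drop the lock around the
slow copy and revalidate afterwards. A minimal sketch, illustrative
only, assuming the surrounding loop can recheck object state after
reacquiring struct_mutex:

	/* Sketch only: drop struct_mutex so the copy may fault safely. */
	mutex_unlock(&dev->struct_mutex);
	unwritten = slow_user_access(dev_priv->gtt.mappable,
				     page_base, page_offset,
				     user_data, page_length, true);
	/* Reacquire and revalidate before touching object state again. */
	mutex_lock(&dev->struct_mutex);
	if (unwritten) {
		ret = -EFAULT;
		goto out_flush;
	}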

Here, though, just use fast_user_write(); it works correctly with
!obj->base.filp.
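
For reference, fast_user_write() in i915_gem.c looks roughly like this
(quoted from memory, so treat the details as approximate). The atomic
mapping plus the _inatomic copy variant cannot sleep or fault into the
handler; on an unresolvable access it just returns the number of bytes
not copied, and the caller falls back to the slow path:

	static inline int
	fast_user_write(struct io_mapping *mapping,
			loff_t page_base, int page_offset,
			char __user *user_data,
			int length)
	{
		void __iomem *vaddr_atomic;
		void *vaddr;
		unsigned long unwritten;

		vaddr_atomic = io_mapping_map_atomic_wc(mapping, page_base);
		/* We can use the cpu mem copy function because this is X86. */
		vaddr = (void __force *)vaddr_atomic + page_offset;
		unwritten = __copy_from_user_inatomic_nocache(vaddr,
							      user_data,
							      length);
		io_mapping_unmap_atomic(vaddr_atomic);
		return unwritten;
	}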
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre