Hi Rob,

Yes, I’d completely missed the shrinker path in this cleanup. But, yeah, I see
how get_pages (which is called with msm_obj->lock held) -> drm_gem_get_pages
could trigger shrinker_scan, which calls msm_gem_purge, i.e. the shrinker can
come in on top of a path that already holds an object lock. Something like this
(assuming the allocation in drm_gem_get_pages is what enters reclaim; the
exact frames may vary):
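
  msm_gem_get_pages()               /* takes msm_obj->lock */
    get_pages()
      drm_gem_get_pages()
        shmem_read_mapping_page()   /* page allocation */
          -> direct reclaim
            msm_gem_shrinker_scan()
              msm_gem_purge()       /* wants to lock objects again */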
It makes sense to prevent any get_pages/vmap on objects that’ve been marked as
DONTNEED, along the lines of your snippet below. I’ll send you a patch soon for
that.
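
Roughly what I have in mind (untested sketch; the MSM_MADV_* names are from
the uapi header, and I’m checking != WILLNEED so already-purged objects get
rejected too; return code still open, per your comment):

  /* e.g. at the top of msm_gem_get_iova(), once msm_obj->lock is held and
   * before we call down into get_pages(); msm_gem_get_vaddr() would do the
   * same but return ERR_PTR(-EBUSY):
   */
  if (WARN_ON(msm_obj->madv != MSM_MADV_WILLNEED)) {
          mutex_unlock(&msm_obj->lock);
          return -EBUSY;
  }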

Thanks,
Sushmita


> On Jun 14, 2017, at 10:49 AM, Rob Clark <robdcl...@gmail.com> wrote:
> 
> On Tue, Jun 13, 2017 at 6:52 PM, Sushmita Susheelendra
> <ssush...@codeaurora.org> wrote:
>> Buffer object specific resources like pages, domains, sg list
>> need not be protected with struct_mutex. They can be protected
>> with a buffer object level lock. This simplifies locking and
>> makes it easier to avoid potential recursive locking scenarios
>> for SVM involving mmap_sem and struct_mutex. This also removes
>> unnecessary serialization when creating buffer objects, and also
>> between buffer object creation and GPU command submission.
> 
> I've rebased this on top of the other changes that are in the queue for 4.13:
> 
> https://github.com/freedreno/kernel-msm/commit/4e0c4e139a647914cfcf7c413da5c19f9f124885
> 
> I've done some light testing on a5xx.. although between this and
> moving gpu->hw_init() under struct_mutex, I need to double check on
> a3xx/a4xx.
> 
> I do think we probably need a bit more work for shrinker.  In
> particular msm_gem_purge() still assumes everything is protected by a
> single struct_mutex, which is no longer true.  The tricky thing here
> is that shrinker can be triggered by code-paths where we already hold
> msm_obj->lock.
> 
> I think we probably want anything that ends up in get_pages() or
> vmap() to have something along the lines of
> 
>  if (WARN_ON(msm_obj->madv == DONTNEED)) {
>    mutex_unlock(&msm_obj->lock);
>    return -EINVAL;   // or -EBUSY, or ??
>  }
> 
> it's really only something that a badly behaved userspace could
> trigger.. but otoh we probably don't want to let a badly behaved
> userspace deadlock things in the kernel.  I guess I probably should
> have written some test cases for this by now.
> 
>> Change-Id: I4fba9f8c38a6cd13f80f660639d1e74d4336e3fb
> 
> jfyi, no need for the Change-Id.. I usually strip them out when I
> apply patches, but I'd appreciate it if you can strip them out before
> sending (because sometimes I forget)
> 
> BR,
> -R
> 
>> Signed-off-by: Sushmita Susheelendra <ssush...@codeaurora.org>
>> ---
>> drivers/gpu/drm/msm/adreno/a5xx_gpu.c    |   2 -
>> drivers/gpu/drm/msm/adreno/a5xx_power.c  |   2 -
>> drivers/gpu/drm/msm/adreno/adreno_gpu.c  |   2 -
>> drivers/gpu/drm/msm/dsi/dsi_host.c       |   5 +-
>> drivers/gpu/drm/msm/mdp/mdp4/mdp4_crtc.c |   2 +-
>> drivers/gpu/drm/msm/mdp/mdp4/mdp4_kms.c  |   2 -
>> drivers/gpu/drm/msm/msm_drv.c            |   1 +
>> drivers/gpu/drm/msm/msm_drv.h            |   7 +-
>> drivers/gpu/drm/msm/msm_fbdev.c          |   6 +-
>> drivers/gpu/drm/msm/msm_gem.c            | 190 +++++++++++++++----------------
>> drivers/gpu/drm/msm/msm_gem.h            |   2 +
>> drivers/gpu/drm/msm/msm_gem_submit.c     |   6 +-
>> drivers/gpu/drm/msm/msm_gem_vma.c        |  10 +-
>> drivers/gpu/drm/msm/msm_gpu.c            |   4 +-
>> drivers/gpu/drm/msm/msm_rd.c             |   4 +-
>> drivers/gpu/drm/msm/msm_ringbuffer.c     |   2 +-
>> 16 files changed, 120 insertions(+), 127 deletions(-)
>> 
>> diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
>> index 31a9bce..7893de1 100644
>> --- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
>> +++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
>> @@ -235,9 +235,7 @@ static struct drm_gem_object *a5xx_ucode_load_bo(struct msm_gpu *gpu,
>>        struct drm_gem_object *bo;
>>        void *ptr;
>> 
>> -       mutex_lock(&drm->struct_mutex);
>>        bo = msm_gem_new(drm, fw->size - 4, MSM_BO_UNCACHED);
>> -       mutex_unlock(&drm->struct_mutex);
>> 
>>        if (IS_ERR(bo))
>>                return bo;
>> diff --git a/drivers/gpu/drm/msm/adreno/a5xx_power.c b/drivers/gpu/drm/msm/adreno/a5xx_power.c
>> index 72d52c7..eb88f44 100644
>> --- a/drivers/gpu/drm/msm/adreno/a5xx_power.c
>> +++ b/drivers/gpu/drm/msm/adreno/a5xx_power.c
>> @@ -294,9 +294,7 @@ void a5xx_gpmu_ucode_init(struct msm_gpu *gpu)
>>         */
>>        bosize = (cmds_size + (cmds_size / TYPE4_MAX_PAYLOAD) + 1) << 2;
>> 
>> -       mutex_lock(&drm->struct_mutex);
>>        a5xx_gpu->gpmu_bo = msm_gem_new(drm, bosize, MSM_BO_UNCACHED);
>> -       mutex_unlock(&drm->struct_mutex);
>> 
>>        if (IS_ERR(a5xx_gpu->gpmu_bo))
>>                goto err;
>> diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
>> index 5b63fc6..1162c15 100644
>> --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
>> +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
>> @@ -392,10 +392,8 @@ int adreno_gpu_init(struct drm_device *drm, struct platform_device *pdev,
>>                        return ret;
>>        }
>> 
>> -       mutex_lock(&drm->struct_mutex);
>>        adreno_gpu->memptrs_bo = msm_gem_new(drm, sizeof(*adreno_gpu->memptrs),
>>                        MSM_BO_UNCACHED);
>> -       mutex_unlock(&drm->struct_mutex);
>>        if (IS_ERR(adreno_gpu->memptrs_bo)) {
>>                ret = PTR_ERR(adreno_gpu->memptrs_bo);
>>                adreno_gpu->memptrs_bo = NULL;
>> diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/dsi_host.c
>> index 4f79b10..6a1b0da 100644
>> --- a/drivers/gpu/drm/msm/dsi/dsi_host.c
>> +++ b/drivers/gpu/drm/msm/dsi/dsi_host.c
>> @@ -980,19 +980,16 @@ static int dsi_tx_buf_alloc(struct msm_dsi_host *msm_host, int size)
>>        uint64_t iova;
>> 
>>        if (cfg_hnd->major == MSM_DSI_VER_MAJOR_6G) {
>> -               mutex_lock(&dev->struct_mutex);
>>                msm_host->tx_gem_obj = msm_gem_new(dev, size, MSM_BO_UNCACHED);
>>                if (IS_ERR(msm_host->tx_gem_obj)) {
>>                        ret = PTR_ERR(msm_host->tx_gem_obj);
>>                        pr_err("%s: failed to allocate gem, %d\n",
>>                                __func__, ret);
>>                        msm_host->tx_gem_obj = NULL;
>> -                       mutex_unlock(&dev->struct_mutex);
>>                        return ret;
>>                }
>> 
>> -               ret = msm_gem_get_iova_locked(msm_host->tx_gem_obj, 0, &iova);
>> -               mutex_unlock(&dev->struct_mutex);
>> +               ret = msm_gem_get_iova(msm_host->tx_gem_obj, 0, &iova);
>>                if (ret) {
>>                        pr_err("%s: failed to get iova, %d\n", __func__, ret);
>>                        return ret;
>> diff --git a/drivers/gpu/drm/msm/mdp/mdp4/mdp4_crtc.c b/drivers/gpu/drm/msm/mdp/mdp4/mdp4_crtc.c
>> index f29194a..15478f8 100644
>> --- a/drivers/gpu/drm/msm/mdp/mdp4/mdp4_crtc.c
>> +++ b/drivers/gpu/drm/msm/mdp/mdp4/mdp4_crtc.c
>> @@ -372,7 +372,7 @@ static void update_cursor(struct drm_crtc *crtc)
>>                if (next_bo) {
>>                        /* take a obj ref + iova ref when we start scanning out: */
>>                        drm_gem_object_reference(next_bo);
>> -                       msm_gem_get_iova_locked(next_bo, mdp4_kms->id, &iova);
>> +                       msm_gem_get_iova(next_bo, mdp4_kms->id, &iova);
>> 
>>                        /* enable cursor: */
>>                        mdp4_write(mdp4_kms, REG_MDP4_DMA_CURSOR_SIZE(dma),
>> diff --git a/drivers/gpu/drm/msm/mdp/mdp4/mdp4_kms.c b/drivers/gpu/drm/msm/mdp/mdp4/mdp4_kms.c
>> index 6295204..3a464b3 100644
>> --- a/drivers/gpu/drm/msm/mdp/mdp4/mdp4_kms.c
>> +++ b/drivers/gpu/drm/msm/mdp/mdp4/mdp4_kms.c
>> @@ -561,9 +561,7 @@ struct msm_kms *mdp4_kms_init(struct drm_device *dev)
>>                goto fail;
>>        }
>> 
>> -       mutex_lock(&dev->struct_mutex);
>>        mdp4_kms->blank_cursor_bo = msm_gem_new(dev, SZ_16K, MSM_BO_WC);
>> -       mutex_unlock(&dev->struct_mutex);
>>        if (IS_ERR(mdp4_kms->blank_cursor_bo)) {
>>                ret = PTR_ERR(mdp4_kms->blank_cursor_bo);
>>                dev_err(dev->dev, "could not allocate blank-cursor bo: %d\n", ret);
>> diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
>> index 87b5695..bb1f3ee 100644
>> --- a/drivers/gpu/drm/msm/msm_drv.c
>> +++ b/drivers/gpu/drm/msm/msm_drv.c
>> @@ -349,6 +349,7 @@ static int msm_init_vram(struct drm_device *dev)
>>                priv->vram.size = size;
>> 
>>                drm_mm_init(&priv->vram.mm, 0, (size >> PAGE_SHIFT) - 1);
>> +               spin_lock_init(&priv->vram.lock);
>> 
>>                attrs |= DMA_ATTR_NO_KERNEL_MAPPING;
>>                attrs |= DMA_ATTR_WRITE_COMBINE;
>> diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
>> index 28b6f9b..567737d 100644
>> --- a/drivers/gpu/drm/msm/msm_drv.h
>> +++ b/drivers/gpu/drm/msm/msm_drv.h
>> @@ -157,6 +157,7 @@ struct msm_drm_private {
>>                 * and position mm_node->start is in # of pages:
>>                 */
>>                struct drm_mm mm;
>> +               spinlock_t lock; /* Protects drm_mm node allocation/removal */
>>        } vram;
>> 
>>        struct notifier_block vmap_notifier;
>> @@ -209,8 +210,6 @@ int msm_gem_mmap_obj(struct drm_gem_object *obj,
>> int msm_gem_mmap(struct file *filp, struct vm_area_struct *vma);
>> int msm_gem_fault(struct vm_fault *vmf);
>> uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj);
>> -int msm_gem_get_iova_locked(struct drm_gem_object *obj, int id,
>> -               uint64_t *iova);
>> int msm_gem_get_iova(struct drm_gem_object *obj, int id, uint64_t *iova);
>> uint64_t msm_gem_iova(struct drm_gem_object *obj, int id);
>> struct page **msm_gem_get_pages(struct drm_gem_object *obj);
>> @@ -228,9 +227,7 @@ struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev,
>>                struct dma_buf_attachment *attach, struct sg_table *sg);
>> int msm_gem_prime_pin(struct drm_gem_object *obj);
>> void msm_gem_prime_unpin(struct drm_gem_object *obj);
>> -void *msm_gem_get_vaddr_locked(struct drm_gem_object *obj);
>> void *msm_gem_get_vaddr(struct drm_gem_object *obj);
>> -void msm_gem_put_vaddr_locked(struct drm_gem_object *obj);
>> void msm_gem_put_vaddr(struct drm_gem_object *obj);
>> int msm_gem_madvise(struct drm_gem_object *obj, unsigned madv);
>> void msm_gem_purge(struct drm_gem_object *obj);
>> @@ -247,6 +244,8 @@ int msm_gem_new_handle(struct drm_device *dev, struct drm_file *file,
>>                uint32_t size, uint32_t flags, uint32_t *handle);
>> struct drm_gem_object *msm_gem_new(struct drm_device *dev,
>>                uint32_t size, uint32_t flags);
>> +struct drm_gem_object *msm_gem_new_locked(struct drm_device *dev,
>> +               uint32_t size, uint32_t flags);
>> struct drm_gem_object *msm_gem_import(struct drm_device *dev,
>>                struct dma_buf *dmabuf, struct sg_table *sgt);
>> 
>> diff --git a/drivers/gpu/drm/msm/msm_fbdev.c b/drivers/gpu/drm/msm/msm_fbdev.c
>> index 951e40f..785925e9 100644
>> --- a/drivers/gpu/drm/msm/msm_fbdev.c
>> +++ b/drivers/gpu/drm/msm/msm_fbdev.c
>> @@ -95,10 +95,8 @@ static int msm_fbdev_create(struct drm_fb_helper *helper,
>>        /* allocate backing bo */
>>        size = mode_cmd.pitches[0] * mode_cmd.height;
>>        DBG("allocating %d bytes for fb %d", size, dev->primary->index);
>> -       mutex_lock(&dev->struct_mutex);
>>        fbdev->bo = msm_gem_new(dev, size, MSM_BO_SCANOUT |
>>                        MSM_BO_WC | MSM_BO_STOLEN);
>> -       mutex_unlock(&dev->struct_mutex);
>>        if (IS_ERR(fbdev->bo)) {
>>                ret = PTR_ERR(fbdev->bo);
>>                fbdev->bo = NULL;
>> @@ -124,7 +122,7 @@ static int msm_fbdev_create(struct drm_fb_helper *helper,
>>         * in panic (ie. lock-safe, etc) we could avoid pinning the
>>         * buffer now:
>>         */
>> -       ret = msm_gem_get_iova_locked(fbdev->bo, 0, &paddr);
>> +       ret = msm_gem_get_iova(fbdev->bo, 0, &paddr);
>>        if (ret) {
>>                dev_err(dev->dev, "failed to get buffer obj iova: %d\n", ret);
>>                goto fail_unlock;
>> @@ -153,7 +151,7 @@ static int msm_fbdev_create(struct drm_fb_helper *helper,
>> 
>>        dev->mode_config.fb_base = paddr;
>> 
>> -       fbi->screen_base = msm_gem_get_vaddr_locked(fbdev->bo);
>> +       fbi->screen_base = msm_gem_get_vaddr(fbdev->bo);
>>        if (IS_ERR(fbi->screen_base)) {
>>                ret = PTR_ERR(fbi->screen_base);
>>                goto fail_unlock;
>> diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
>> index 68e509b..1e803fb 100644
>> --- a/drivers/gpu/drm/msm/msm_gem.c
>> +++ b/drivers/gpu/drm/msm/msm_gem.c
>> @@ -41,8 +41,7 @@ static bool use_pages(struct drm_gem_object *obj)
>> }
>> 
>> /* allocate pages from VRAM carveout, used when no IOMMU: */
>> -static struct page **get_pages_vram(struct drm_gem_object *obj,
>> -               int npages)
>> +static struct page **get_pages_vram(struct drm_gem_object *obj, int npages)
>> {
>>        struct msm_gem_object *msm_obj = to_msm_bo(obj);
>>        struct msm_drm_private *priv = obj->dev->dev_private;
>> @@ -54,7 +53,9 @@ static struct page **get_pages_vram(struct drm_gem_object *obj,
>>        if (!p)
>>                return ERR_PTR(-ENOMEM);
>> 
>> +       spin_lock(&priv->vram.lock);
>>        ret = drm_mm_insert_node(&priv->vram.mm, msm_obj->vram_node, npages);
>> +       spin_unlock(&priv->vram.lock);
>>        if (ret) {
>>                drm_free_large(p);
>>                return ERR_PTR(ret);
>> @@ -69,7 +70,6 @@ static struct page **get_pages_vram(struct drm_gem_object *obj,
>>        return p;
>> }
>> 
>> -/* called with dev->struct_mutex held */
>> static struct page **get_pages(struct drm_gem_object *obj)
>> {
>>        struct msm_gem_object *msm_obj = to_msm_bo(obj);
>> @@ -109,6 +109,18 @@ static struct page **get_pages(struct drm_gem_object *obj)
>>        return msm_obj->pages;
>> }
>> 
>> +static void put_pages_vram(struct drm_gem_object *obj)
>> +{
>> +       struct msm_gem_object *msm_obj = to_msm_bo(obj);
>> +       struct msm_drm_private *priv = obj->dev->dev_private;
>> +
>> +       spin_lock(&priv->vram.lock);
>> +       drm_mm_remove_node(msm_obj->vram_node);
>> +       spin_unlock(&priv->vram.lock);
>> +
>> +       drm_free_large(msm_obj->pages);
>> +}
>> +
>> static void put_pages(struct drm_gem_object *obj)
>> {
>>        struct msm_gem_object *msm_obj = to_msm_bo(obj);
>> @@ -125,10 +137,8 @@ static void put_pages(struct drm_gem_object *obj)
>> 
>>                if (use_pages(obj))
>>                        drm_gem_put_pages(obj, msm_obj->pages, true, false);
>> -               else {
>> -                       drm_mm_remove_node(msm_obj->vram_node);
>> -                       drm_free_large(msm_obj->pages);
>> -               }
>> +               else
>> +                       put_pages_vram(obj);
>> 
>>                msm_obj->pages = NULL;
>>        }
>> @@ -136,11 +146,12 @@ static void put_pages(struct drm_gem_object *obj)
>> 
>> struct page **msm_gem_get_pages(struct drm_gem_object *obj)
>> {
>> -       struct drm_device *dev = obj->dev;
>> +       struct msm_gem_object *msm_obj = to_msm_bo(obj);
>>        struct page **p;
>> -       mutex_lock(&dev->struct_mutex);
>> +
>> +       mutex_lock(&msm_obj->lock);
>>        p = get_pages(obj);
>> -       mutex_unlock(&dev->struct_mutex);
>> +       mutex_unlock(&msm_obj->lock);
>>        return p;
>> }
>> 
>> @@ -195,25 +206,17 @@ int msm_gem_fault(struct vm_fault *vmf)
>> {
>>        struct vm_area_struct *vma = vmf->vma;
>>        struct drm_gem_object *obj = vma->vm_private_data;
>> -       struct drm_device *dev = obj->dev;
>> -       struct msm_drm_private *priv = dev->dev_private;
>> +       struct msm_gem_object *msm_obj = to_msm_bo(obj);
>>        struct page **pages;
>>        unsigned long pfn;
>>        pgoff_t pgoff;
>>        int ret;
>> 
>> -       /* This should only happen if userspace tries to pass a mmap'd
>> -        * but unfaulted gem bo vaddr into submit ioctl, triggering
>> -        * a page fault while struct_mutex is already held.  This is
>> -        * not a valid use-case so just bail.
>> +       /*
>> +        * vm_ops.open/drm_gem_mmap_obj and close get and put
>> +        * a reference on obj. So, we dont need to hold one here.
>>         */
>> -       if (priv->struct_mutex_task == current)
>> -               return VM_FAULT_SIGBUS;
>> -
>> -       /* Make sure we don't parallel update on a fault, nor move or remove
>> -        * something from beneath our feet
>> -        */
>> -       ret = mutex_lock_interruptible(&dev->struct_mutex);
>> +       ret = mutex_lock_interruptible(&msm_obj->lock);
>>        if (ret)
>>                goto out;
>> 
>> @@ -235,7 +238,7 @@ int msm_gem_fault(struct vm_fault *vmf)
>>        ret = vm_insert_mixed(vma, vmf->address, __pfn_to_pfn_t(pfn, PFN_DEV));
>> 
>> out_unlock:
>> -       mutex_unlock(&dev->struct_mutex);
>> +       mutex_unlock(&msm_obj->lock);
>> out:
>>        switch (ret) {
>>        case -EAGAIN:
>> @@ -259,9 +262,10 @@ int msm_gem_fault(struct vm_fault *vmf)
>> static uint64_t mmap_offset(struct drm_gem_object *obj)
>> {
>>        struct drm_device *dev = obj->dev;
>> +       struct msm_gem_object *msm_obj = to_msm_bo(obj);
>>        int ret;
>> 
>> -       WARN_ON(!mutex_is_locked(&dev->struct_mutex));
>> +       WARN_ON(!mutex_is_locked(&msm_obj->lock));
>> 
>>        /* Make it mmapable */
>>        ret = drm_gem_create_mmap_offset(obj);
>> @@ -277,21 +281,23 @@ static uint64_t mmap_offset(struct drm_gem_object *obj)
>> uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj)
>> {
>>        uint64_t offset;
>> -       mutex_lock(&obj->dev->struct_mutex);
>> +       struct msm_gem_object *msm_obj = to_msm_bo(obj);
>> +
>> +       mutex_lock(&msm_obj->lock);
>>        offset = mmap_offset(obj);
>> -       mutex_unlock(&obj->dev->struct_mutex);
>> +       mutex_unlock(&msm_obj->lock);
>>        return offset;
>> }
>> 
>> +/* Called with msm_obj->lock locked */
>> static void
>> put_iova(struct drm_gem_object *obj)
>> {
>> -       struct drm_device *dev = obj->dev;
>>        struct msm_drm_private *priv = obj->dev->dev_private;
>>        struct msm_gem_object *msm_obj = to_msm_bo(obj);
>>        int id;
>> 
>> -       WARN_ON(!mutex_is_locked(&dev->struct_mutex));
>> +       WARN_ON(!mutex_is_locked(&msm_obj->lock));
>> 
>>        for (id = 0; id < ARRAY_SIZE(msm_obj->domain); id++) {
>>                if (!priv->aspace[id])
>> @@ -301,25 +307,23 @@ uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj)
>>        }
>> }
>> 
>> -/* should be called under struct_mutex.. although it can be called
>> - * from atomic context without struct_mutex to acquire an extra
>> - * iova ref if you know one is already held.
>> - *
>> - * That means when I do eventually need to add support for unpinning
>> - * the refcnt counter needs to be atomic_t.
>> - */
>> -int msm_gem_get_iova_locked(struct drm_gem_object *obj, int id,
>> +/* A reference to obj must be held before calling this function. */
>> +int msm_gem_get_iova(struct drm_gem_object *obj, int id,
>>                uint64_t *iova)
>> {
>>        struct msm_gem_object *msm_obj = to_msm_bo(obj);
>>        int ret = 0;
>> 
>> +       mutex_lock(&msm_obj->lock);
>> +
>>        if (!msm_obj->domain[id].iova) {
>>                struct msm_drm_private *priv = obj->dev->dev_private;
>>                struct page **pages = get_pages(obj);
>> 
>> -               if (IS_ERR(pages))
>> +               if (IS_ERR(pages)) {
>> +                       mutex_unlock(&msm_obj->lock);
>>                        return PTR_ERR(pages);
>> +               }
>> 
>>                if (iommu_present(&platform_bus_type)) {
>>                        ret = msm_gem_map_vma(priv->aspace[id], &msm_obj->domain[id],
>> @@ -332,26 +336,7 @@ int msm_gem_get_iova_locked(struct drm_gem_object *obj, int id,
>>        if (!ret)
>>                *iova = msm_obj->domain[id].iova;
>> 
>> -       return ret;
>> -}
>> -
>> -/* get iova, taking a reference.  Should have a matching put */
>> -int msm_gem_get_iova(struct drm_gem_object *obj, int id, uint64_t *iova)
>> -{
>> -       struct msm_gem_object *msm_obj = to_msm_bo(obj);
>> -       int ret;
>> -
>> -       /* this is safe right now because we don't unmap until the
>> -        * bo is deleted:
>> -        */
>> -       if (msm_obj->domain[id].iova) {
>> -               *iova = msm_obj->domain[id].iova;
>> -               return 0;
>> -       }
>> -
>> -       mutex_lock(&obj->dev->struct_mutex);
>> -       ret = msm_gem_get_iova_locked(obj, id, iova);
>> -       mutex_unlock(&obj->dev->struct_mutex);
>> +       mutex_unlock(&msm_obj->lock);
>>        return ret;
>> }
>> 
>> @@ -405,45 +390,37 @@ int msm_gem_dumb_map_offset(struct drm_file *file, struct drm_device *dev,
>>        return ret;
>> }
>> 
>> -void *msm_gem_get_vaddr_locked(struct drm_gem_object *obj)
>> +void *msm_gem_get_vaddr(struct drm_gem_object *obj)
>> {
>>        struct msm_gem_object *msm_obj = to_msm_bo(obj);
>> -       WARN_ON(!mutex_is_locked(&obj->dev->struct_mutex));
>> +
>> +       mutex_lock(&msm_obj->lock);
>>        if (!msm_obj->vaddr) {
>>                struct page **pages = get_pages(obj);
>> -               if (IS_ERR(pages))
>> +               if (IS_ERR(pages)) {
>> +                       mutex_unlock(&msm_obj->lock);
>>                        return ERR_CAST(pages);
>> +               }
>>                msm_obj->vaddr = vmap(pages, obj->size >> PAGE_SHIFT,
>>                                VM_MAP, pgprot_writecombine(PAGE_KERNEL));
>> -               if (msm_obj->vaddr == NULL)
>> +               if (msm_obj->vaddr == NULL) {
>> +                       mutex_unlock(&msm_obj->lock);
>>                        return ERR_PTR(-ENOMEM);
>> +               }
>>        }
>>        msm_obj->vmap_count++;
>> +       mutex_unlock(&msm_obj->lock);
>>        return msm_obj->vaddr;
>> }
>> 
>> -void *msm_gem_get_vaddr(struct drm_gem_object *obj)
>> -{
>> -       void *ret;
>> -       mutex_lock(&obj->dev->struct_mutex);
>> -       ret = msm_gem_get_vaddr_locked(obj);
>> -       mutex_unlock(&obj->dev->struct_mutex);
>> -       return ret;
>> -}
>> -
>> -void msm_gem_put_vaddr_locked(struct drm_gem_object *obj)
>> +void msm_gem_put_vaddr(struct drm_gem_object *obj)
>> {
>>        struct msm_gem_object *msm_obj = to_msm_bo(obj);
>> -       WARN_ON(!mutex_is_locked(&obj->dev->struct_mutex));
>> +
>> +       mutex_lock(&msm_obj->lock);
>>        WARN_ON(msm_obj->vmap_count < 1);
>>        msm_obj->vmap_count--;
>> -}
>> -
>> -void msm_gem_put_vaddr(struct drm_gem_object *obj)
>> -{
>> -       mutex_lock(&obj->dev->struct_mutex);
>> -       msm_gem_put_vaddr_locked(obj);
>> -       mutex_unlock(&obj->dev->struct_mutex);
>> +       mutex_unlock(&msm_obj->lock);
>> }
>> 
>> /* Update madvise status, returns true if not purged, else
>> @@ -697,6 +674,8 @@ void msm_gem_free_object(struct drm_gem_object *obj)
>> 
>>        list_del(&msm_obj->mm_list);
>> 
>> +       mutex_lock(&msm_obj->lock);
>> +
>>        put_iova(obj);
>> 
>>        if (obj->import_attach) {
>> @@ -720,6 +699,7 @@ void msm_gem_free_object(struct drm_gem_object *obj)
>> 
>>        drm_gem_object_release(obj);
>> 
>> +       mutex_unlock(&msm_obj->lock);
>>        kfree(msm_obj);
>> }
>> 
>> @@ -730,14 +710,8 @@ int msm_gem_new_handle(struct drm_device *dev, struct drm_file *file,
>>        struct drm_gem_object *obj;
>>        int ret;
>> 
>> -       ret = mutex_lock_interruptible(&dev->struct_mutex);
>> -       if (ret)
>> -               return ret;
>> -
>>        obj = msm_gem_new(dev, size, flags);
>> 
>> -       mutex_unlock(&dev->struct_mutex);
>> -
>>        if (IS_ERR(obj))
>>                return PTR_ERR(obj);
>> 
>> @@ -752,7 +726,8 @@ int msm_gem_new_handle(struct drm_device *dev, struct drm_file *file,
>> static int msm_gem_new_impl(struct drm_device *dev,
>>                uint32_t size, uint32_t flags,
>>                struct reservation_object *resv,
>> -               struct drm_gem_object **obj)
>> +               struct drm_gem_object **obj,
>> +               bool struct_mutex_locked)
>> {
>>        struct msm_drm_private *priv = dev->dev_private;
>>        struct msm_gem_object *msm_obj;
>> @@ -781,6 +756,8 @@ static int msm_gem_new_impl(struct drm_device *dev,
>>        if (!msm_obj)
>>                return -ENOMEM;
>> 
>> +       mutex_init(&msm_obj->lock);
>> +
>>        if (use_vram)
>>                msm_obj->vram_node = &msm_obj->domain[0].node;
>> 
>> @@ -795,21 +772,25 @@ static int msm_gem_new_impl(struct drm_device *dev,
>>        }
>> 
>>        INIT_LIST_HEAD(&msm_obj->submit_entry);
>> -       list_add_tail(&msm_obj->mm_list, &priv->inactive_list);
>> +       if (struct_mutex_locked) {
>> +               list_add_tail(&msm_obj->mm_list, &priv->inactive_list);
>> +       } else {
>> +               mutex_lock(&dev->struct_mutex);
>> +               list_add_tail(&msm_obj->mm_list, &priv->inactive_list);
>> +               mutex_unlock(&dev->struct_mutex);
>> +       }
>> 
>>        *obj = &msm_obj->base;
>> 
>>        return 0;
>> }
>> 
>> -struct drm_gem_object *msm_gem_new(struct drm_device *dev,
>> -               uint32_t size, uint32_t flags)
>> +static struct drm_gem_object *_msm_gem_new(struct drm_device *dev,
>> +               uint32_t size, uint32_t flags, bool struct_mutex_locked)
>> {
>>        struct drm_gem_object *obj = NULL;
>>        int ret;
>> 
>> -       WARN_ON(!mutex_is_locked(&dev->struct_mutex));
>> -
>>        size = PAGE_ALIGN(size);
>> 
>>        /* Disallow zero sized objects as they make the underlying
>> @@ -818,7 +799,7 @@ struct drm_gem_object *msm_gem_new(struct drm_device *dev,
>>        if (size == 0)
>>                return ERR_PTR(-EINVAL);
>> 
>> -       ret = msm_gem_new_impl(dev, size, flags, NULL, &obj);
>> +       ret = msm_gem_new_impl(dev, size, flags, NULL, &obj, 
>> struct_mutex_locked);
>>        if (ret)
>>                goto fail;
>> 
>> @@ -833,10 +814,22 @@ struct drm_gem_object *msm_gem_new(struct drm_device *dev,
>>        return obj;
>> 
>> fail:
>> -       drm_gem_object_unreference(obj);
>> +       drm_gem_object_unreference_unlocked(obj);
>>        return ERR_PTR(ret);
>> }
>> 
>> +struct drm_gem_object *msm_gem_new_locked(struct drm_device *dev,
>> +               uint32_t size, uint32_t flags)
>> +{
>> +       return _msm_gem_new(dev, size, flags, true);
>> +}
>> +
>> +struct drm_gem_object *msm_gem_new(struct drm_device *dev,
>> +               uint32_t size, uint32_t flags)
>> +{
>> +       return _msm_gem_new(dev, size, flags, false);
>> +}
>> +
>> struct drm_gem_object *msm_gem_import(struct drm_device *dev,
>>                struct dma_buf *dmabuf, struct sg_table *sgt)
>> {
>> @@ -853,7 +846,7 @@ struct drm_gem_object *msm_gem_import(struct drm_device *dev,
>> 
>>        size = PAGE_ALIGN(dmabuf->size);
>> 
>> -       ret = msm_gem_new_impl(dev, size, MSM_BO_WC, dmabuf->resv, &obj);
>> -       ret = msm_gem_new_impl(dev, size, MSM_BO_WC, dmabuf->resv, &obj);
>> +       ret = msm_gem_new_impl(dev, size, MSM_BO_WC, dmabuf->resv, &obj, false);
>>        if (ret)
>>                goto fail;
>> 
>> @@ -862,17 +855,22 @@ struct drm_gem_object *msm_gem_import(struct drm_device *dev,
>>        npages = size / PAGE_SIZE;
>> 
>>        msm_obj = to_msm_bo(obj);
>> +       mutex_lock(&msm_obj->lock);
>>        msm_obj->sgt = sgt;
>>        msm_obj->pages = drm_malloc_ab(npages, sizeof(struct page *));
>>        if (!msm_obj->pages) {
>> +               mutex_unlock(&msm_obj->lock);
>>                ret = -ENOMEM;
>>                goto fail;
>>        }
>> 
>>        ret = drm_prime_sg_to_page_addr_arrays(sgt, msm_obj->pages, NULL, npages);
>> -       if (ret)
>> +       if (ret) {
>> +               mutex_unlock(&msm_obj->lock);
>>                goto fail;
>> +       }
>> 
>> +       mutex_unlock(&msm_obj->lock);
>>        return obj;
>> 
>> fail:
>> diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
>> index 1b4cf20..b1bfc10 100644
>> --- a/drivers/gpu/drm/msm/msm_gem.h
>> +++ b/drivers/gpu/drm/msm/msm_gem.h
>> @@ -31,6 +31,7 @@ struct msm_gem_address_space {
>>         * and position mm_node->start is in # of pages:
>>         */
>>        struct drm_mm mm;
>> +       spinlock_t lock; /* Protects drm_mm node allocation/removal */
>>        struct msm_mmu *mmu;
>>        struct kref kref;
>> };
>> @@ -87,6 +88,7 @@ struct msm_gem_object {
>>         * an IOMMU.  Also used for stolen/splashscreen buffer.
>>         */
>>        struct drm_mm_node *vram_node;
>> +       struct mutex lock; /* Protects resources associated with bo */
>> };
>> #define to_msm_bo(x) container_of(x, struct msm_gem_object, base)
>> 
>> diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
>> index 1c545eb..8420551 100644
>> --- a/drivers/gpu/drm/msm/msm_gem_submit.c
>> +++ b/drivers/gpu/drm/msm/msm_gem_submit.c
>> @@ -245,7 +245,7 @@ static int submit_pin_objects(struct msm_gem_submit *submit)
>>                uint64_t iova;
>> 
>>                /* if locking succeeded, pin bo: */
>> -               ret = msm_gem_get_iova_locked(&msm_obj->base,
>> +               ret = msm_gem_get_iova(&msm_obj->base,
>>                                submit->gpu->id, &iova);
>> 
>>                if (ret)
>> @@ -301,7 +301,7 @@ static int submit_reloc(struct msm_gem_submit *submit, struct msm_gem_object *ob
>>        /* For now, just map the entire thing.  Eventually we probably
>>         * to do it page-by-page, w/ kmap() if not vmap()d..
>>         */
>> -       ptr = msm_gem_get_vaddr_locked(&obj->base);
>> +       ptr = msm_gem_get_vaddr(&obj->base);
>> 
>>        if (IS_ERR(ptr)) {
>>                ret = PTR_ERR(ptr);
>> @@ -359,7 +359,7 @@ static int submit_reloc(struct msm_gem_submit *submit, struct msm_gem_object *ob
>>        }
>> 
>> out:
>> -       msm_gem_put_vaddr_locked(&obj->base);
>> +       msm_gem_put_vaddr(&obj->base);
>> 
>>        return ret;
>> }
>> diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
>> index f285d7e..c36321bc 100644
>> --- a/drivers/gpu/drm/msm/msm_gem_vma.c
>> +++ b/drivers/gpu/drm/msm/msm_gem_vma.c
>> @@ -50,7 +50,9 @@ void msm_gem_address_space_put(struct msm_gem_address_space *aspace)
>>                aspace->mmu->funcs->unmap(aspace->mmu, vma->iova, sgt, size);
>>        }
>> 
>> +       spin_lock(&aspace->lock);
>>        drm_mm_remove_node(&vma->node);
>> +       spin_unlock(&aspace->lock);
>> 
>>        vma->iova = 0;
>> 
>> @@ -63,10 +65,15 @@ void msm_gem_address_space_put(struct msm_gem_address_space *aspace)
>> {
>>        int ret;
>> 
>> -       if (WARN_ON(drm_mm_node_allocated(&vma->node)))
>> +       spin_lock(&aspace->lock);
>> +       if (WARN_ON(drm_mm_node_allocated(&vma->node))) {
>> +               spin_unlock(&aspace->lock);
>>                return 0;
>> +       }
>> 
>>        ret = drm_mm_insert_node(&aspace->mm, &vma->node, npages);
>> +       spin_unlock(&aspace->lock);
>> +
>>        if (ret)
>>                return ret;
>> 
>> @@ -94,6 +101,7 @@ struct msm_gem_address_space *
>>        if (!aspace)
>>                return ERR_PTR(-ENOMEM);
>> 
>> +       spin_lock_init(&aspace->lock);
>>        aspace->name = name;
>>        aspace->mmu = msm_iommu_new(dev, domain);
>> 
>> diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
>> index 97b9c38..d6e5cb2 100644
>> --- a/drivers/gpu/drm/msm/msm_gpu.c
>> +++ b/drivers/gpu/drm/msm/msm_gpu.c
>> @@ -495,7 +495,7 @@ void msm_gpu_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit,
>> 
>>                /* submit takes a reference to the bo and iova until retired: */
>>                drm_gem_object_reference(&msm_obj->base);
>> -               msm_gem_get_iova_locked(&msm_obj->base,
>> +               msm_gem_get_iova(&msm_obj->base,
>>                                submit->gpu->id, &iova);
>> 
>>                if (submit->bos[i].flags & MSM_SUBMIT_BO_WRITE)
>> @@ -662,9 +662,7 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev,
>> 
>> 
>>        /* Create ringbuffer: */
>> -       mutex_lock(&drm->struct_mutex);
>>        gpu->rb = msm_ringbuffer_new(gpu, ringsz);
>> -       mutex_unlock(&drm->struct_mutex);
>>        if (IS_ERR(gpu->rb)) {
>>                ret = PTR_ERR(gpu->rb);
>>                gpu->rb = NULL;
>> diff --git a/drivers/gpu/drm/msm/msm_rd.c b/drivers/gpu/drm/msm/msm_rd.c
>> index 0e81faa..0366b80 100644
>> --- a/drivers/gpu/drm/msm/msm_rd.c
>> +++ b/drivers/gpu/drm/msm/msm_rd.c
>> @@ -268,7 +268,7 @@ static void snapshot_buf(struct msm_rd_state *rd,
>>        struct msm_gem_object *obj = submit->bos[idx].obj;
>>        const char *buf;
>> 
>> -       buf = msm_gem_get_vaddr_locked(&obj->base);
>> +       buf = msm_gem_get_vaddr(&obj->base);
>>        if (IS_ERR(buf))
>>                return;
>> 
>> @@ -283,7 +283,7 @@ static void snapshot_buf(struct msm_rd_state *rd,
>>                        (uint32_t[3]){ iova, size, iova >> 32 }, 12);
>>        rd_write_section(rd, RD_BUFFER_CONTENTS, buf, size);
>> 
>> -       msm_gem_put_vaddr_locked(&obj->base);
>> +       msm_gem_put_vaddr(&obj->base);
>> }
>> 
>> /* called under struct_mutex */
>> diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.c b/drivers/gpu/drm/msm/msm_ringbuffer.c
>> index 67b34e0..791bca3 100644
>> --- a/drivers/gpu/drm/msm/msm_ringbuffer.c
>> +++ b/drivers/gpu/drm/msm/msm_ringbuffer.c
>> @@ -40,7 +40,7 @@ struct msm_ringbuffer *msm_ringbuffer_new(struct msm_gpu *gpu, int size)
>>                goto fail;
>>        }
>> 
>> -       ring->start = msm_gem_get_vaddr_locked(ring->bo);
>> +       ring->start = msm_gem_get_vaddr(ring->bo);
>>        if (IS_ERR(ring->start)) {
>>                ret = PTR_ERR(ring->start);
>>                goto fail;
>> --
>> 1.9.1
>> 