On Wed, Aug 07, 2019 at 08:02:14AM -0700, Ralph Campbell wrote:
> When memory is migrated to the GPU it is likely to be accessed by GPU
> code soon afterwards. Instead of waiting for a GPU fault, map the
> migrated memory into the GPU page tables with the same access permissions
> as the source CPU page table entries. This preserves copy on write
> semantics.
> 
> Signed-off-by: Ralph Campbell <rcampb...@nvidia.com>
> Cc: Christoph Hellwig <h...@lst.de>
> Cc: Jason Gunthorpe <j...@mellanox.com>
> Cc: "Jérôme Glisse" <jgli...@redhat.com>
> Cc: Ben Skeggs <bske...@redhat.com>
> ---
> 
> This patch is based on top of Christoph Hellwig's 9 patch series
> https://lore.kernel.org/linux-mm/20190729234611.gc7...@redhat.com/T/#u
> "turn the hmm migrate_vma upside down" but without patch 9
> "mm: remove the unused MIGRATE_PFN_WRITE" and adds a use for the flag.

This looks useful.  I've already dropped that patch for the pending
resend.

>  static unsigned long nouveau_dmem_migrate_copy_one(struct nouveau_drm *drm,
> -             struct vm_area_struct *vma, unsigned long addr,
> -             unsigned long src, dma_addr_t *dma_addr)
> +             struct vm_area_struct *vma, unsigned long src,
> +             dma_addr_t *dma_addr, u64 *pfn)

I'll pick up the removal of the not needed addr argument for the patch
introducing nouveau_dmem_migrate_copy_one, thanks,

>  static void nouveau_dmem_migrate_chunk(struct migrate_vma *args,
> -             struct nouveau_drm *drm, dma_addr_t *dma_addrs)
> +             struct nouveau_drm *drm, dma_addr_t *dma_addrs, u64 *pfns)
>  {
>       struct nouveau_fence *fence;
>       unsigned long addr = args->start, nr_dma = 0, i;
>  
>       for (i = 0; addr < args->end; i++) {
>               args->dst[i] = nouveau_dmem_migrate_copy_one(drm, args->vma,
> -                             addr, args->src[i], &dma_addrs[nr_dma]);
> +                             args->src[i], &dma_addrs[nr_dma], &pfns[i]);

Nit: I find the &pfns[i] way to pass the argument a little weird to read.
Why not "pfns + i"?
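(For what it's worth, the two spellings are interchangeable by the C definition of subscripting — a[i] is *(a + i), so &a[i] is a + i — which makes this purely a readability preference. A trivial userspace check, with a toy helper:)

```c
#include <assert.h>
#include <stdint.h>

/* &pfns[i] and pfns + i denote the same address by definition:
 * pfns[i] is *(pfns + i), so &pfns[i] is just pfns + i. */
static int toy_same_addr(uint64_t *pfns, unsigned long i)
{
	return &pfns[i] == pfns + i;
}
```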

> +u64 *
> +nouveau_pfns_alloc(unsigned long npages)
> +{
> +     struct nouveau_pfnmap_args *args;
> +
> +     args = kzalloc(sizeof(*args) + npages * sizeof(args->p.phys[0]),

Can we use struct_size here?
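(For anyone unfamiliar with it: struct_size() from <linux/overflow.h> computes sizeof(*p) + count * sizeof(p->member[0]) but saturates to SIZE_MAX on overflow, so the allocation would become kzalloc(struct_size(args, p.phys, npages), GFP_KERNEL). A userspace sketch of the semantics — toy code, not the kernel macro itself:)

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Toy equivalent of struct_size(): header size plus count trailing
 * elements, saturating to SIZE_MAX on overflow so a subsequent
 * allocation fails instead of being silently undersized. */
static size_t toy_struct_size(size_t base, size_t elem, size_t count)
{
	if (elem && count > (SIZE_MAX - base) / elem)
		return SIZE_MAX;
	return base + elem * count;
}
```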

> +     int ret;
> +
> +     if (!svm)
> +             return;
> +
> +     mutex_lock(&svm->mutex);
> +     svmm = nouveau_find_svmm(svm, mm);
> +     if (!svmm) {
> +             mutex_unlock(&svm->mutex);
> +             return;
> +     }
> +     mutex_unlock(&svm->mutex);

Given that nouveau_find_svmm doesn't take any kind of reference, what
guarantees svmm doesn't go away after dropping the lock?
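(The usual fix is to take a reference on the object while the lookup lock
is still held, and drop it once the caller is done. Sketched below with a
toy object and hand-rolled refcount, not the actual svmm API — the real
driver would presumably use kref or similar:)

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

/* Toy stand-in for struct nouveau_svmm (hypothetical fields). */
struct toy_svmm {
	int refcount;	/* protected by the owner's mutex here */
	int alive;	/* 1 until the last reference is dropped */
};

struct toy_svm {
	pthread_mutex_t mutex;
	struct toy_svmm *svmm;	/* the object a lookup would find */
};

/* Lookup that takes an extra reference while the lock is still held,
 * so the object cannot be freed once the lock is dropped. */
static struct toy_svmm *toy_find_svmm_get(struct toy_svm *svm)
{
	struct toy_svmm *svmm;

	pthread_mutex_lock(&svm->mutex);
	svmm = svm->svmm;
	if (svmm)
		svmm->refcount++;	/* ref taken under the lock */
	pthread_mutex_unlock(&svm->mutex);
	return svmm;
}

static void toy_svmm_put(struct toy_svm *svm, struct toy_svmm *svmm)
{
	pthread_mutex_lock(&svm->mutex);
	if (--svmm->refcount == 0)
		svmm->alive = 0;	/* would kfree() in the kernel */
	pthread_mutex_unlock(&svm->mutex);
}
```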

> @@ -44,5 +49,19 @@ static inline int nouveau_svmm_bind(struct drm_device 
> *device, void *p,
>  {
>       return -ENOSYS;
>  }
> +
> +u64 *nouveau_pfns_alloc(unsigned long npages)
> +{
> +     return NULL;
> +}
> +
> +void nouveau_pfns_free(u64 *pfns)
> +{
> +}
> +
> +void nouveau_pfns_map(struct nouveau_drm *drm, struct mm_struct *mm,
> +                   unsigned long addr, u64 *pfns, unsigned long npages)
> +{
> +}
>  #endif /* IS_ENABLED(CONFIG_DRM_NOUVEAU_SVM) */

nouveau_dmem.c and nouveau_svm.c are both built conditional on
CONFIG_DRM_NOUVEAU_SVM, so there is no need for stubs here.
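(If I'm reading the nouveau Kbuild correctly, both objects are wired up
roughly like this, so the whole translation unit simply isn't built when
the option is off:)

```makefile
nouveau-$(CONFIG_DRM_NOUVEAU_SVM) += nouveau_dmem.o
nouveau-$(CONFIG_DRM_NOUVEAU_SVM) += nouveau_svm.o
```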