Re: [Nouveau] [PATCH 1/2] drm/nouveau: move io_reserve_lru handling into the driver v2

2020-02-13 Thread Christian König

Hi Ben,

sorry for the delayed response. I have been rather busy recently.

Am 28.01.20 um 06:49 schrieb Ben Skeggs:

On Sat, 25 Jan 2020 at 00:30, Christian König
 wrote:

From: Christian König 

While working on TTM cleanups I've found that the io_reserve_lru used by
Nouveau is actually not working at all.

In general we should remove driver specific handling from the memory
management, so this patch moves the io_reserve_lru handling into Nouveau
instead.

The patch should be functionally correct, but it is only compile tested!

NACK on this as it currently stands.  It not only causes invalid io
accesses somehow on module load, but while attempting to track down
why, I realised there's a more severe issue.  This removes the
distinction between kmap() and mapping into userspace, the former of
which should not be placed onto the LRU as an eviction candidate.


Yeah, I already feared that this wouldn't work out of the box. It's just a 
bit hard to write stuff without being able to test it.



We *do* require the LRU, so it's not something that can just be
dropped completely.  There's a user report of a SIGBUS caused by the
bug you noticed, which prevents it from working right now.


Well, that you require an LRU is obvious. But since this is completely 
Nouveau specific, that handling doesn't need to be, and actually should 
not be, in TTM.


And as Daniel pointed out, the current handling in the combination of TTM 
and Nouveau is fundamentally broken.


Userspace can either force other applications into a SIGBUS at will, or 
drive the page fault handling into a live lock where clients keep evicting 
each other's BAR mappings. Both of those are rather bad.



To outline how this could work correctly:

1. You split up your PCIe BAR into pages/segments of fixed size. I'm not 
sure what the NVidia hardware can handle, but I would use something like 
2MB segments for that to match the x86 huge page size.


2. Whenever you get a page fault or a kmap request for a buffer, you walk 
your LRU and grab a free segment.


3. Segments grabbed for kmap requests are obviously not put back on the 
LRU (or are skipped while held). Segments grabbed for page fault requests 
are immediately put back on the LRU.



The idea here is that you have enough segments (in this example, 256 MB 
BAR / 2 MB segment size = 128 segments) that you always have more segments 
than page faults that can happen at the same time.
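
To make steps 1-3 concrete, here is a minimal sketch written as plain 
userspace C so it can actually be compiled and run. Everything in it is 
invented for illustration (the names bar_segment, bar_grab_segment and so 
on, the hand-rolled list, the missing locking); a real driver would use 
list_head helpers under a mutex and would also unmap the previous user of 
a recycled segment.

#include <stdbool.h>
#include <stdio.h>

#define BAR_SIZE     (256u << 20)  /* 256 MB BAR, as in the example above */
#define SEGMENT_SIZE (2u << 20)    /* 2 MB segments, the x86 huge page size */
#define NUM_SEGMENTS (BAR_SIZE / SEGMENT_SIZE)  /* = 128 */

struct bar_segment {
    unsigned int index;               /* which 2 MB slice of the BAR */
    bool pinned;                      /* kmap'd: not an eviction candidate */
    struct bar_segment *prev, *next;  /* position on the LRU */
};

static struct bar_segment segments[NUM_SEGMENTS];
static struct bar_segment lru;        /* circular list head, oldest first */

static void lru_del(struct bar_segment *s)
{
    s->prev->next = s->next;
    s->next->prev = s->prev;
    s->next = s->prev = s;
}

static void lru_add_tail(struct bar_segment *s)
{
    s->prev = lru.prev;
    s->next = &lru;
    lru.prev->next = s;
    lru.prev = s;
}

/* Step 2: on a page fault or kmap request, grab the least recently used
 * segment.  Pinned segments are off the list entirely, which is the
 * "skipped while grabbed" variant of step 3. */
static struct bar_segment *bar_grab_segment(bool for_kmap)
{
    struct bar_segment *s = lru.next;

    if (s == &lru)
        return NULL;        /* everything is kmap'd; caller must wait */

    lru_del(s);             /* a real driver would unmap the old user here */
    if (for_kmap)
        s->pinned = true;   /* stays off the LRU until kunmap */
    else
        lru_add_tail(s);    /* fault mappings are eviction candidates again */
    return s;
}

static void bar_put_kmap_segment(struct bar_segment *s)
{
    s->pinned = false;
    lru_add_tail(s);        /* becomes an eviction candidate again */
}

int main(void)
{
    lru.next = lru.prev = &lru;
    for (unsigned int i = 0; i < NUM_SEGMENTS; i++) {
        segments[i].index = i;
        segments[i].next = segments[i].prev = &segments[i];
        lru_add_tail(&segments[i]);
    }

    struct bar_segment *k = bar_grab_segment(true);   /* kmap: pinned */
    struct bar_segment *f = bar_grab_segment(false);  /* fault: recycled */
    printf("kmap -> segment %u, fault -> segment %u of %u\n",
           k->index, f->index, NUM_SEGMENTS);
    bar_put_kmap_segment(k);
    return 0;
}

The property this buys is exactly the one described above: fault mappings 
are always immediately recyclable, so with more segments than concurrently 
outstanding faults the fault handler can never starve, while kmap'd 
segments are simply invisible to eviction.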


I'm completely fine with keeping the code around for now, but as 
outlined above the whole handling is rather broken.


Regards,
Christian.



Ben.


v2: don't call ttm_bo_unmap_virtual in nouveau_ttm_io_mem_reserve

Signed-off-by: Christian König 
Acked-by: Daniel Vetter 
---
  drivers/gpu/drm/nouveau/nouveau_bo.c  | 107 --
  drivers/gpu/drm/nouveau/nouveau_bo.h  |   3 +
  drivers/gpu/drm/nouveau/nouveau_drv.h |   2 +
  drivers/gpu/drm/nouveau/nouveau_ttm.c |  43 ++-
  4 files changed, 131 insertions(+), 24 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c 
b/drivers/gpu/drm/nouveau/nouveau_bo.c
index 81668104595f..acee054f77ed 100644
--- a/drivers/gpu/drm/nouveau/nouveau_bo.c
+++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
@@ -137,6 +137,7 @@ nouveau_bo_del_ttm(struct ttm_buffer_object *bo)
 struct nouveau_bo *nvbo = nouveau_bo(bo);

 WARN_ON(nvbo->pin_refcnt > 0);
+   nouveau_bo_del_io_reserve_lru(bo);
 nv10_bo_put_tile_region(dev, nvbo->tile, NULL);

 /*
@@ -304,6 +305,7 @@ nouveau_bo_init(struct nouveau_bo *nvbo, u64 size, int 
align, u32 flags,

 nvbo->bo.mem.num_pages = size >> PAGE_SHIFT;
 nouveau_bo_placement_set(nvbo, flags, 0);
+   INIT_LIST_HEAD(&nvbo->io_reserve_lru);

 ret = ttm_bo_init(nvbo->bo.bdev, &nvbo->bo, size, type,
   &nvbo->placement, align >> PAGE_SHIFT, false,
@@ -574,6 +576,26 @@ nouveau_bo_sync_for_cpu(struct nouveau_bo *nvbo)
 PAGE_SIZE, DMA_FROM_DEVICE);
  }

+void nouveau_bo_add_io_reserve_lru(struct ttm_buffer_object *bo)
+{
+   struct nouveau_drm *drm = nouveau_bdev(bo->bdev);
+   struct nouveau_bo *nvbo = nouveau_bo(bo);
+
+   mutex_lock(&drm->ttm.io_reserve_mutex);
+   list_move_tail(&nvbo->io_reserve_lru, &drm->ttm.io_reserve_lru);
+   mutex_unlock(&drm->ttm.io_reserve_mutex);
+}
+
+void nouveau_bo_del_io_reserve_lru(struct ttm_buffer_object *bo)
+{
+   struct nouveau_drm *drm = nouveau_bdev(bo->bdev);
+   struct nouveau_bo *nvbo = nouveau_bo(bo);
+
+   mutex_lock(&drm->ttm.io_reserve_mutex);
+   list_del_init(&nvbo->io_reserve_lru);
+   mutex_unlock(&drm->ttm.io_reserve_mutex);
+}
+
  int
  nouveau_bo_validate(struct nouveau_bo *nvbo, bool interruptible,
 bool no_wait_gpu)
@@ -675,8 +697,6 @@ nouveau_bo_init_mem_type(struct ttm_bo_device *bdev, 
uint32_t type,
 }

 man->func = &nouveau_vram_manager;
-   man->io_reserve_fastpath = false;
-   man->use_io_reserve_lru = true;
 } else {
 man->func = &ttm_bo_manager_func;

Re: [Nouveau] [PATCH 1/2] drm/nouveau: move io_reserve_lru handling into the driver v2

2020-01-27 Thread Ben Skeggs
On Sat, 25 Jan 2020 at 00:30, Christian König
 wrote:
>
> From: Christian König 
>
> While working on TTM cleanups I've found that the io_reserve_lru used by
> Nouveau is actually not working at all.
>
> In general we should remove driver specific handling from the memory
> management, so this patch moves the io_reserve_lru handling into Nouveau
> instead.
>
> The patch should be functionally correct, but it is only compile tested!
NACK on this as it currently stands.  It not only causes invalid io
accesses somehow on module load, but while attempting to track down
why, I realised there's a more severe issue.  This removes the
distinction between kmap() and mapping into userspace, the former of
which should not be placed onto the LRU as an eviction candidate.

We *do* require the LRU, so it's not something that can just be
dropped completely.  There's a user report of a SIGBUS caused by the
bug you noticed, which prevents it from working right now.

Ben.

>
> v2: don't call ttm_bo_unmap_virtual in nouveau_ttm_io_mem_reserve
>
> Signed-off-by: Christian König 
> Acked-by: Daniel Vetter 
> ---
>  drivers/gpu/drm/nouveau/nouveau_bo.c  | 107 --
>  drivers/gpu/drm/nouveau/nouveau_bo.h  |   3 +
>  drivers/gpu/drm/nouveau/nouveau_drv.h |   2 +
>  drivers/gpu/drm/nouveau/nouveau_ttm.c |  43 ++-
>  4 files changed, 131 insertions(+), 24 deletions(-)
>
> diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c 
> b/drivers/gpu/drm/nouveau/nouveau_bo.c
> index 81668104595f..acee054f77ed 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_bo.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
> @@ -137,6 +137,7 @@ nouveau_bo_del_ttm(struct ttm_buffer_object *bo)
> struct nouveau_bo *nvbo = nouveau_bo(bo);
>
> WARN_ON(nvbo->pin_refcnt > 0);
> +   nouveau_bo_del_io_reserve_lru(bo);
> nv10_bo_put_tile_region(dev, nvbo->tile, NULL);
>
> /*
> @@ -304,6 +305,7 @@ nouveau_bo_init(struct nouveau_bo *nvbo, u64 size, int 
> align, u32 flags,
>
> nvbo->bo.mem.num_pages = size >> PAGE_SHIFT;
> nouveau_bo_placement_set(nvbo, flags, 0);
> +   INIT_LIST_HEAD(&nvbo->io_reserve_lru);
>
> ret = ttm_bo_init(nvbo->bo.bdev, &nvbo->bo, size, type,
>   &nvbo->placement, align >> PAGE_SHIFT, false,
> @@ -574,6 +576,26 @@ nouveau_bo_sync_for_cpu(struct nouveau_bo *nvbo)
> PAGE_SIZE, DMA_FROM_DEVICE);
>  }
>
> +void nouveau_bo_add_io_reserve_lru(struct ttm_buffer_object *bo)
> +{
> +   struct nouveau_drm *drm = nouveau_bdev(bo->bdev);
> +   struct nouveau_bo *nvbo = nouveau_bo(bo);
> +
> +   mutex_lock(&drm->ttm.io_reserve_mutex);
> +   list_move_tail(&nvbo->io_reserve_lru, &drm->ttm.io_reserve_lru);
> +   mutex_unlock(&drm->ttm.io_reserve_mutex);
> +}
> +
> +void nouveau_bo_del_io_reserve_lru(struct ttm_buffer_object *bo)
> +{
> +   struct nouveau_drm *drm = nouveau_bdev(bo->bdev);
> +   struct nouveau_bo *nvbo = nouveau_bo(bo);
> +
> +   mutex_lock(&drm->ttm.io_reserve_mutex);
> +   list_del_init(&nvbo->io_reserve_lru);
> +   mutex_unlock(&drm->ttm.io_reserve_mutex);
> +}
> +
>  int
>  nouveau_bo_validate(struct nouveau_bo *nvbo, bool interruptible,
> bool no_wait_gpu)
> @@ -675,8 +697,6 @@ nouveau_bo_init_mem_type(struct ttm_bo_device *bdev, 
> uint32_t type,
> }
>
> man->func = &nouveau_vram_manager;
> -   man->io_reserve_fastpath = false;
> -   man->use_io_reserve_lru = true;
> } else {
> man->func = &ttm_bo_manager_func;
> }
> @@ -1305,6 +1325,8 @@ nouveau_bo_move_ntfy(struct ttm_buffer_object *bo, bool 
> evict,
> if (bo->destroy != nouveau_bo_del_ttm)
> return;
>
> +   nouveau_bo_del_io_reserve_lru(bo);
> +
> if (mem && new_reg->mem_type != TTM_PL_SYSTEM &&
> mem->mem.page == nvbo->page) {
> list_for_each_entry(vma, &nvbo->vma_list, head) {
> @@ -1427,6 +1449,30 @@ nouveau_bo_verify_access(struct ttm_buffer_object *bo, 
> struct file *filp)
>   filp->private_data);
>  }
>
> +static void
> +nouveau_ttm_io_mem_free_locked(struct nouveau_drm *drm, struct ttm_mem_reg 
> *reg)
> +{
> +   struct nouveau_mem *mem = nouveau_mem(reg);
> +
> +   if (!reg->bus.base)
> +   return; /* already freed */
> +
> +   if (drm->client.mem->oclass >= NVIF_CLASS_MEM_NV50) {
> +   switch (reg->mem_type) {
> +   case TTM_PL_TT:
> +   if (mem->kind)
> nvif_object_unmap_handle(&mem->mem.object);
> +   break;
> +   case TTM_PL_VRAM:
> nvif_object_unmap_handle(&mem->mem.object);
> +   break;
> +   default:
> +   break;
> +   }
> +   }
> +   reg->bus.base = 0;
> +}
> +
>