On 11.09.2018 at 11:50, Huang Rui wrote:
On Sun, Sep 09, 2018 at 08:03:29PM +0200, Christian König wrote:
Try to allocate VRAM in power-of-two sizes and only fall back to the
vram split sizes if that fails.

Signed-off-by: Christian König <christian.koe...@amd.com>
---
  drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c | 52 +++++++++++++++++++++-------
  1 file changed, 40 insertions(+), 12 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
index 9cfa8a9ada92..3f9d5d00c9b3 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
@@ -124,6 +124,28 @@ u64 amdgpu_vram_mgr_bo_visible_size(struct amdgpu_bo *bo)
        return usage;
  }
+/**
+ * amdgpu_vram_mgr_virt_start - update virtual start address
+ *
+ * @mem: ttm_mem_reg to update
+ * @node: just allocated node
+ *
+ * Calculate a virtual BO start address to easily check if everything is CPU
+ * accessible.
+ */
+static void amdgpu_vram_mgr_virt_start(struct ttm_mem_reg *mem,
+                                      struct drm_mm_node *node)
+{
+       unsigned long start;
+
+       start = node->start + node->size;
+       if (start > mem->num_pages)
+               start -= mem->num_pages;
+       else
+               start = 0;
+       mem->start = max(mem->start, start);
+}
+
How about moving this function into ttm (ttm_bo_manager.c) and then
exporting the symbol? It looks more like a ttm helper function.

No, that is a completely amdgpu-specific hack.

We just provide a virtual start address so that the page fault handler can quickly check whether the BO is in CPU-visible VRAM or not.
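
[Editor's note: for illustration, a minimal sketch of how a fault path
can use that virtual start address. The helper name is hypothetical;
adev->gmc.visible_vram_size follows the amdgpu driver of that era.
Because mem->start is set to the maximum of
(node->start + node->size - num_pages) over all nodes, the whole BO is
CPU accessible iff its virtual end fits into visible VRAM, so a single
comparison suffices. E.g. one node at page 0x1000 with size 0x200 and
num_pages 0x200 yields virtual start 0x1000 and virtual end 0x1200.]

/* Sketch only, not the driver's actual function. */
static bool bo_fully_cpu_visible(struct amdgpu_device *adev,
                                 struct ttm_mem_reg *mem)
{
        /* Virtual end of the BO in bytes, using the virtual start
         * computed by amdgpu_vram_mgr_virt_start().
         */
        u64 end = ((u64)mem->start + mem->num_pages) << PAGE_SHIFT;

        return end <= adev->gmc.visible_vram_size;
}
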

Regards,
Christian.


Thanks,
Ray

  /**
   * amdgpu_vram_mgr_new - allocate new ranges
   *
@@ -176,10 +198,25 @@ static int amdgpu_vram_mgr_new(struct ttm_mem_type_manager *man,
        pages_left = mem->num_pages;

        spin_lock(&mgr->lock);
-       for (i = 0; i < num_nodes; ++i) {
+       for (i = 0; pages_left >= pages_per_node; ++i) {
+               unsigned long pages = rounddown_pow_of_two(pages_left);
+
+               r = drm_mm_insert_node_in_range(mm, &nodes[i], pages,
+                                               pages_per_node, 0,
+                                               place->fpfn, lpfn,
+                                               mode);
+               if (unlikely(r))
+                       break;
+
+               usage += nodes[i].size << PAGE_SHIFT;
+               vis_usage += amdgpu_vram_mgr_vis_size(adev, &nodes[i]);
+               amdgpu_vram_mgr_virt_start(mem, &nodes[i]);
+               pages_left -= pages;
+       }
+
+       for (; pages_left; ++i) {
                unsigned long pages = min(pages_left, pages_per_node);
                uint32_t alignment = mem->page_alignment;
-               unsigned long start;

                if (pages == pages_per_node)
                        alignment = pages_per_node;
@@ -193,16 +230,7 @@ static int amdgpu_vram_mgr_new(struct ttm_mem_type_manager *man,

                usage += nodes[i].size << PAGE_SHIFT;
                vis_usage += amdgpu_vram_mgr_vis_size(adev, &nodes[i]);
-
-               /* Calculate a virtual BO start address to easily check if
-                * everything is CPU accessible.
-                */
-               start = nodes[i].start + nodes[i].size;
-               if (start > mem->num_pages)
-                       start -= mem->num_pages;
-               else
-                       start = 0;
-               mem->start = max(mem->start, start);
+               amdgpu_vram_mgr_virt_start(mem, &nodes[i]);
                pages_left -= pages;
        }
        spin_unlock(&mgr->lock);
--
2.14.1
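
[Editor's note: the splitting behaviour of the two loops above can be
reproduced with a small standalone sketch. The pages_per_node value,
the request size, and all names here are illustrative, not taken from
the driver.]

#include <stdio.h>

/* Illustrative stand-in for the kernel's rounddown_pow_of_two(). */
static unsigned long rounddown_pow2(unsigned long n)
{
        unsigned long p = 1;

        while (p * 2 <= n)
                p *= 2;
        return p;
}

int main(void)
{
        const unsigned long pages_per_node = 256; /* assumed chunk size */
        unsigned long pages_left = 768;           /* example request */

        /* Phase 1: power-of-two sized nodes, as in the first loop. */
        while (pages_left >= pages_per_node) {
                unsigned long pages = rounddown_pow2(pages_left);

                printf("node of %lu pages\n", pages);
                pages_left -= pages;
        }

        /* Phase 2: fall back to fixed-size chunks for the remainder,
         * as in the second loop.
         */
        while (pages_left) {
                unsigned long pages = pages_left < pages_per_node ?
                                      pages_left : pages_per_node;

                printf("fallback node of %lu pages\n", pages);
                pages_left -= pages;
        }
        return 0;
}

[With a 768-page request and pages_per_node = 256 this prints one
512-page node and one 256-page node, where the old code would have
produced three 256-page nodes.]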
