Hello,

Pinging again re: review. I'd also like to ask whether we can revert the select_page_shift() handling back to the v1 behavior with a fall-back path, as it looks like there are some cases where nouveau_bo_fixup_align() isn't enough: https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/36450#note_3159199.
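Something along these lines is what I had in mind, just a rough sketch on top of this patch that reuses op_map_aligned_to_page_shift(); the actual v1 code may have looked different, and a proper version would presumably also need to limit itself to page shifts the VMM actually supports:

static u8
select_page_shift(struct nouveau_uvmm *uvmm, struct drm_gpuva_op_map *op)
{
        struct nouveau_bo *nvbo = nouveau_gem_object(op->gem.obj);
        u8 shift;

        /* Prefer the BO's page shift when the mapping is aligned to it. */
        if (op_map_aligned_to_page_shift(op, nvbo->page))
                return nvbo->page;

        /* Fall back: search downwards for the largest shift that still
         * satisfies the VA/range/offset alignment, bottoming out at 4K.
         * (Illustrative only -- a real version would only consider shifts
         * the VMM supports.)
         */
        for (shift = nvbo->page - 1; shift > PAGE_SHIFT; shift--) {
                if (op_map_aligned_to_page_shift(op, shift))
                        return shift;
        }

        return PAGE_SHIFT;
}

That way we keep large pages in the common case and degrade gracefully instead of WARN-ing when an op shows up that nouveau_bo_fixup_align() didn't catch.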
Thanks!

On Fri, Oct 10, 2025 at 2:39 AM Mohamed Ahmed <[email protected]> wrote:
>
> From: Mary Guillemard <[email protected]>
>
> Now that everything in UVMM knows about the variable page shift, we can
> select larger values.
>
> The proposed approach relies on nouveau_bo::page unless if it would cause
> alignment issues (in which case we fall back to searching for an
> appropriate shift)
>
> Signed-off-by: Mary Guillemard <[email protected]>
> Co-developed-by: Mohamed Ahmed <[email protected]>
> Signed-off-by: Mohamed Ahmed <[email protected]>
> ---
>  drivers/gpu/drm/nouveau/nouveau_uvmm.c | 29 ++++++++++++++++++++++++--
>  1 file changed, 27 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/nouveau/nouveau_uvmm.c b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
> index 2cd0835b05e8..26edc60a530b 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_uvmm.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_uvmm.c
> @@ -454,6 +454,31 @@ op_unmap_prepare_unwind(struct drm_gpuva *va)
>          drm_gpuva_insert(va->vm, va);
>  }
>
> +static bool
> +op_map_aligned_to_page_shift(const struct drm_gpuva_op_map *op, u8 page_shift)
> +{
> +        u64 page_size = 1ULL << page_shift;
> +
> +        return op->va.addr % page_size == 0 && op->va.range % page_size == 0 &&
> +               op->gem.offset % page_size == 0;
> +}
> +
> +static u8
> +select_page_shift(struct nouveau_uvmm *uvmm, struct drm_gpuva_op_map *op)
> +{
> +        struct nouveau_bo *nvbo = nouveau_gem_object(op->gem.obj);
> +
> +        /* nouveau_bo_fixup_align() guarantees for us that the page size will be aligned
> +         * but just in case, make sure that it is aligned.
> +         */
> +        if (op_map_aligned_to_page_shift(op, nvbo->page))
> +                return nvbo->page;
> +
> +        /* This should never happen, but raise a warning and return 4K if we get here. */
> +        WARN_ON(1);
> +        return PAGE_SHIFT;
> +}
> +
>  static void
>  nouveau_uvmm_sm_prepare_unwind(struct nouveau_uvmm *uvmm,
>                                 struct nouveau_uvma_prealloc *new,
> @@ -506,7 +531,7 @@ nouveau_uvmm_sm_prepare_unwind(struct nouveau_uvmm *uvmm,
>                          if (vmm_get_range)
>                                  nouveau_uvmm_vmm_put(uvmm, vmm_get_start,
>                                                       vmm_get_range,
> -                                                     PAGE_SHIFT);
> +                                                     select_page_shift(uvmm, &op->map));
>                          break;
>                  }
>                  case DRM_GPUVA_OP_REMAP: {
> @@ -599,7 +624,7 @@ op_map_prepare(struct nouveau_uvmm *uvmm,
>
>          uvma->region = args->region;
>          uvma->kind = args->kind;
> -        uvma->page_shift = PAGE_SHIFT;
> +        uvma->page_shift = select_page_shift(uvmm, op);
>
>          drm_gpuva_map(&uvmm->base, &uvma->va, op);
>
> --
> 2.51.0
>
