On Fri, Mar 28, 2025 at 7:48 AM Danilo Krummrich <[email protected]> wrote:
>
> On Thu, Mar 27, 2025 at 03:01:54PM -0400, M Henning wrote:
> > On Thu, Mar 27, 2025 at 9:58 AM Danilo Krummrich <[email protected]> wrote:
> > >
> > > On Fri, Mar 21, 2025 at 07:00:57PM -0400, M Henning wrote:
> > > > This is a pointer in the gpu's virtual address space. It must be
> > > > aligned according to ctxsw_align and be at least ctxsw_size bytes
> > > > (where those values come from the nouveau_abi16_ioctl_get_zcull_info
> > > > structure). I'll change the description to say that much.
> > > >
> > > > Yes, this is GEM-backed. I'm actually not entirely sure what the
> > > > requirements are here, since this part is reverse-engineered. I think
> > > > NOUVEAU_GEM_DOMAIN_VRAM and NOUVEAU_GEM_DOMAIN_GART are both okay. The
> > > > proprietary driver allocates this buffer using
> > > > NV_ESC_RM_VID_HEAP_CONTROL and sets attr = NVOS32_ATTR_LOCATION_ANY |
> > > > NVOS32_ATTR_PAGE_SIZE_BIG | NVOS32_ATTR_PHYSICALITY_CONTIGUOUS, attr2
> > > > = NVOS32_ATTR2_GPU_CACHEABLE_YES | NVOS32_ATTR2_ZBC_PREFER_NO_ZBC.
> > >
> > > (Please do not top post.)
> > >
> > > What I mean is how do you map the backing GEM into the GPU's virtual
> > > address space? Since it's bound to a channel, I assume that it must be
> > > ensured it's properly mapped when work is pushed to the channel. Is it
> > > mapped through VM_BIND?
> >
> > Yes. The userspace code for this is here:
> > https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/33861/diffs?commit_id=0c4baab863730f9fc8b417834ffcbb400f11d617
> > It calls into the usual function for driver internal allocations
> > (nvkmd_dev_alloc_mem), which calls VM_BIND internally.
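
To make that concrete, here's roughly what that path reduces to (a sketch,
not the actual Mesa code: error handling is dropped, the VM is assumed to
already be initialized via VM_INIT, and the VA is assumed to come from an
allocator that honors ctxsw_align - only the existing GEM_NEW/VM_BIND uAPI
is used here):

	#include <stdint.h>
	#include <sys/ioctl.h>
	#include <drm/nouveau_drm.h>

	static void zcull_buf_create(int fd, uint64_t va, uint64_t ctxsw_size)
	{
		/* GEM-backed; VRAM and GART both appear to be acceptable. */
		struct drm_nouveau_gem_new gem = {
			.info.domain = NOUVEAU_GEM_DOMAIN_VRAM,
			.info.size   = ctxsw_size,
		};
		ioctl(fd, DRM_IOCTL_NOUVEAU_GEM_NEW, &gem);

		/* Synchronous VM_BIND mapping the BO at 'va', which the
		 * caller must have aligned to ctxsw_align. This is what
		 * nvkmd_dev_alloc_mem ends up doing internally. */
		struct drm_nouveau_vm_bind_op op = {
			.op     = DRM_NOUVEAU_VM_BIND_OP_MAP,
			.handle = gem.info.handle,
			.addr   = va,
			.range  = gem.info.size,
		};
		struct drm_nouveau_vm_bind bind = {
			.op_count = 1,
			.op_ptr   = (uintptr_t)&op,
		};
		ioctl(fd, DRM_IOCTL_NOUVEAU_VM_BIND, &bind);

		/* 'va' is then the pointer that gets handed to
		 * DRM_NOUVEAU_SET_ZCULL_CTXSW_BUFFER. */
	}
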
> BOs mapped through VM_BIND are prone to eviction, is this a problem here,
> or is it fine if it is only ensured that this mapping is valid for the
> duration of subsequent EXEC jobs?

I don't see the proprietary driver doing anything special related to
eviction in traces, which is to say that I think it's fine for it to just
be valid for subsequent EXEC jobs. That being said, I don't have the
deepest understanding of how memory mapping works in
open-gpu-kernel-modules, so I might have missed something. Is there a good
way to test this code path? Can I e.g. force the kernel to evict
everything in order to test that context switching still works?

> Does the mapping need to be valid when DRM_NOUVEAU_SET_ZCULL_CTXSW_BUFFER
> is called? If so, how is this ensured?

I don't think so. My understanding is that this call just sets a pointer.

> Can DRM_NOUVEAU_SET_ZCULL_CTXSW_BUFFER be called in between multiple
> DRM_NOUVEAU_EXEC calls?

Yes, there's nothing that requires a specific ordering - we can use the
context normally both before and after the
DRM_NOUVEAU_SET_ZCULL_CTXSW_BUFFER call.

> Does it maybe need an async mode, such as EXEC and VM_BIND? (To me it
> doesn't seem to be the case, but those questions still need an answer.)

I don't think so. Userspace calls it twice per context right now (once for
init, once for teardown), so I don't expect it to be especially
performance-critical.

> I also think we should document those things.
>
> > I don't understand: why is this line of questioning important?
>
> By sending those patches you ask me as the maintainer of the project to
> take responsibility for your changes. In this case it even goes further.
> In fact, you ask me to take responsibility for a new interface, which,
> since it is a uAPI, can *never* be removed in the future after being
> released.
>
> It is part of my job to act responsibly, which includes understanding
> what the interface does, how it is intended to be used, and whether it is
> sufficient for its purpose or has any flaws.

Right, sorry about this - I didn't mean to question the purpose of your
questions at all. You of course have every right to ask questions about a
patch during code review. I was just confused about why you were asking me
specifically about VM_BIND, although that's a bit clearer to me now that
you've asked about eviction.
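
To spell out the per-context lifecycle I have in mind (again only a
sketch: the struct and field names for the new ioctl below are
placeholders rather than the final uAPI, and using ptr = 0 to detach is my
assumption of how teardown would be expressed):

	/* Hypothetical per-context lifecycle; struct/field names are
	 * placeholders and ptr = 0 as "detach" is an assumption. */
	struct drm_nouveau_set_zcull_ctxsw_buffer args = {
		.channel = channel,
		.ptr     = zcull_va, /* ctxsw_align-aligned, >= ctxsw_size */
	};

	/* init: attach the buffer; ordering relative to EXEC is free */
	ioctl(fd, DRM_IOCTL_NOUVEAU_SET_ZCULL_CTXSW_BUFFER, &args);

	/* ... any number of DRM_NOUVEAU_EXEC submissions; the VM_BIND
	 * mapping only needs to stay valid while those jobs can run ... */

	/* teardown: detach before unmapping/freeing the BO */
	args.ptr = 0;
	ioctl(fd, DRM_IOCTL_NOUVEAU_SET_ZCULL_CTXSW_BUFFER, &args);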
