On Mon, May 19, 2025 at 2:45 PM Dave Airlie <airl...@gmail.com> wrote:
>
> On Tue, 20 May 2025 at 07:25, Rob Clark <robdcl...@gmail.com> wrote:
> >
> > On Mon, May 19, 2025 at 2:15 PM Dave Airlie <airl...@gmail.com> wrote:
> > >
> > > On Tue, 20 May 2025 at 03:54, Rob Clark <robdcl...@gmail.com> wrote:
> > > >
> > > > From: Rob Clark <robdcl...@chromium.org>
> > > >
> > > > Conversion to DRM GPU VA Manager[1], and adding support for Vulkan
> > > > Sparse Memory[2] in the form of:
> > > >
> > > > 1. A new VM_BIND submitqueue type for executing VM MSM_SUBMIT_BO_OP_MAP/
> > > >    MAP_NULL/UNMAP commands
> > > >
> > > > 2. A new VM_BIND ioctl to allow submitting batches of one or more
> > > >    MAP/MAP_NULL/UNMAP commands to a VM_BIND submitqueue
> > > >
> > > > I did not implement support for synchronous VM_BIND commands.  Since
> > > > userspace could just immediately wait for the `SUBMIT` to complete, I
> > > > don't think we need this extra complexity in the kernel.
> > > > Synchronous/immediate VM_BIND operations could be implemented with a
> > > > 2nd VM_BIND submitqueue.
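
(Aside, to make the proposed flow concrete, here is a rough userspace
sketch.  The VM_BIND struct/ioctl/flag names below are illustrative
only, see the patches for the actual UAPI definitions:

   #include <stdint.h>
   #include <xf86drm.h>
   #include "msm_drm.h"

   static int map_bo_example(int fd, uint32_t bo_handle,
                             uint64_t iova, uint64_t range)
   {
      /* 1) create a VM_BIND submitqueue (hypothetical flag name) */
      struct drm_msm_submitqueue queue_req = {
         .flags = MSM_SUBMITQUEUE_VM_BIND,
      };
      int ret = drmCommandWriteRead(fd, DRM_MSM_SUBMITQUEUE_NEW,
                                    &queue_req, sizeof(queue_req));
      if (ret)
         return ret;

      /* 2) submit a batch (here a single MAP op) to that queue via
       * the new VM_BIND ioctl (hypothetical struct/field names) */
      struct drm_msm_vm_bind_op op = {
         .op = MSM_VM_BIND_OP_MAP,
         .handle = bo_handle,
         .obj_offset = 0,
         .iova = iova,
         .range = range,
      };
      struct drm_msm_vm_bind req = {
         .queue_id = queue_req.id,
         .nr_ops = 1,
         .ops = (uint64_t)(uintptr_t)&op,
      };
      return drmCommandWriteRead(fd, DRM_MSM_VM_BIND,
                                 &req, sizeof(req));
   }
)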
> > >
> > > This seems suboptimal for Vulkan userspaces. Non-sparse binds are all
> > > synchronous, so you are adding an extra ioctl to wait. Or do you
> > > manage these via a different mechanism?
> >
> > Normally it's just an extra in-fence for the SUBMIT ioctl to ensure
> > the binds happen before cmd execution.
> >
> > When it comes to UAPI, it's easier to add something later than to
> > take something away, so I don't see a problem adding synchronous
> > binds later if that proves to be needed.  But I don't think it is.
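
(To illustrate what I mean by an extra in-fence, a fragment using the
existing MSM_SUBMIT_FENCE_FD_IN mechanism; gfx_queue_id,
vm_bind_fence_fd, and cmd are stand-ins:

   /* make cmdstream execution wait for the preceding binds by
    * passing the VM_BIND out-fence as an in-fence on the SUBMIT */
   struct drm_msm_gem_submit submit = {
      .flags    = MSM_SUBMIT_FENCE_FD_IN,
      .queueid  = gfx_queue_id,
      .fence_fd = vm_bind_fence_fd,
      .nr_cmds  = 1,
      .cmds     = (uint64_t)(uintptr_t)&cmd,
   };
   drmCommandWriteRead(fd, DRM_MSM_GEM_SUBMIT, &submit, sizeof(submit));
)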
>
> I'm not 100% sure that is conformant behaviour per the Vulkan spec.
>
> Two questions come to mind:
> 1. Where is this out-fence stored? Vulkan being explicit, with no
> guarantees about what threads are doing, it seems like you'd need a
> lock in the Vulkan driver to store it, especially if multiple threads
> bind memory.

turnip is protecting dev->vm_bind_fence_fd with a u_rwlock.
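
Roughly this pattern (a simplified sketch, not the actual turnip code;
u_rwlock is mesa's util/rwlock.h wrapper, the field/variable names here
are made up):

   /* after a VM_BIND submit hands back an out-fence fd, publish it
    * under the write lock.. the VM_BIND queue is in-order, so the
    * newest fence implies all the older ones */
   u_rwlock_wrlock(&dev->vm_bind_rwlock);
   if (dev->vm_bind_fence_fd >= 0)
      close(dev->vm_bind_fence_fd);
   dev->vm_bind_fence_fd = new_fence_fd;
   u_rwlock_wrunlock(&dev->vm_bind_rwlock);

   /* submit paths that need it as an in-fence take a dup under the
    * read lock, so multiple threads can snapshot it concurrently */
   u_rwlock_rdlock(&dev->vm_bind_rwlock);
   int in_fence_fd = (dev->vm_bind_fence_fd >= 0) ?
      dup(dev->vm_bind_fence_fd) : -1;
   u_rwlock_rdunlock(&dev->vm_bind_rwlock);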

> 2. If it's fine to lazily bind on the hw side, do you also handle the
> case where something is bound and immediately freed? Where does the
> fence go then? Do you wait for the fence before destroying things?

Right now turnip is just relying on the UNMAP/unbind going through the
same queue, so it is ordered after the binds.. but I guess it could
also use vm_bind_fence_fd as an in-fence.
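
Something like this, if we wanted to be explicit about it (sketch;
sync_wait() is the libsync helper mesa vendors in util/libsync.h,
free_backing_memory() is a hypothetical stand-in):

   /* snapshot the last bind/unbind fence, and wait for it (or pass
    * it as an in-fence to the UNMAP) before freeing the pages */
   u_rwlock_rdlock(&dev->vm_bind_rwlock);
   int fence_fd = (dev->vm_bind_fence_fd >= 0) ?
      dup(dev->vm_bind_fence_fd) : -1;
   u_rwlock_rdunlock(&dev->vm_bind_rwlock);

   if (fence_fd >= 0) {
      sync_wait(fence_fd, -1 /* no timeout */);
      close(fence_fd);
   }
   free_backing_memory(bo);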

BR,
-R
