On Wed, Oct 1, 2025 at 5:13 PM Boris Brezillon
<[email protected]> wrote:
>
> On Wed, 1 Oct 2025 16:42:35 +0200
> Alice Ryhl <[email protected]> wrote:
>
> > On Wed, Oct 1, 2025 at 4:01 PM Danilo Krummrich <[email protected]> wrote:
> > >
> > > On Wed Oct 1, 2025 at 12:41 PM CEST, Alice Ryhl wrote:
> > > > +/*
> > > > + * Must be called with GEM mutex held. After releasing GEM mutex,
> > > > + * drm_gpuvm_bo_defer_free_unlocked() must be called.
> > > > + */
> > > > +static void
> > > > +drm_gpuvm_bo_defer_free_locked(struct kref *kref)
> > > > +{
> > > > +     struct drm_gpuvm_bo *vm_bo = container_of(kref, struct drm_gpuvm_bo,
> > > > +                                               kref);
> > > > +     struct drm_gpuvm *gpuvm = vm_bo->vm;
> > > > +
> > > > +     if (!drm_gpuvm_resv_protected(gpuvm)) {
> > > > +             drm_gpuvm_bo_list_del(vm_bo, extobj, true);
> > > > +             drm_gpuvm_bo_list_del(vm_bo, evict, true);
> > > > +     }
> > > > +
> > > > +     list_del(&vm_bo->list.entry.gem);
> > > > +}
> > > > +
> > > > +/*
> > > > + * GEM mutex must not be held. Called after drm_gpuvm_bo_defer_free_locked().
> > > > + */
> > > > +static void
> > > > +drm_gpuvm_bo_defer_free_unlocked(struct drm_gpuvm_bo *vm_bo)
> > > > +{
> > > > +     struct drm_gpuvm *gpuvm = vm_bo->vm;
> > > > +
> > > > +     llist_add(&vm_bo->list.entry.bo_defer, &gpuvm->bo_defer);
> > > > +}
> > > > +
> > > > +static void
> > > > +drm_gpuvm_bo_defer_free(struct kref *kref)
> > > > +{
> > > > +     struct drm_gpuvm_bo *vm_bo = container_of(kref, struct drm_gpuvm_bo,
> > > > +                                               kref);
> > > > +
> > > > +     mutex_lock(&vm_bo->obj->gpuva.lock);
> > > > +     drm_gpuvm_bo_defer_free_locked(kref);
> > > > +     mutex_unlock(&vm_bo->obj->gpuva.lock);
> > > > +
> > > > +     /*
> > > > +      * It's important that the GEM stays alive for the duration in which we
> > > > +      * hold the mutex, but the instant we add the vm_bo to bo_defer,
> > > > +      * another thread might call drm_gpuvm_bo_deferred_cleanup() and put
> > > > +      * the GEM. Therefore, to avoid kfreeing a mutex we are holding, we add
> > > > +      * the vm_bo to bo_defer *after* releasing the GEM's mutex.
> > > > +      */
> > > > +     drm_gpuvm_bo_defer_free_unlocked(vm_bo);
> > > > +}
> > >
> > > So, you're splitting drm_gpuvm_bo_defer_free() into two functions, one doing
> > > the work that is required to be called with the gpuva lock held and one that
> > > does the work that does not require a lock, which makes perfect sense.
> > >
> > > However, the naming chosen for the two functions, i.e.
> > > drm_gpuvm_bo_defer_free_unlocked() and drm_gpuvm_bo_defer_free_locked() is
> > > confusing:
> > >
> > > What you semantically mean is "do part 1 with lock held" and "do part 2
> > > without lock held", but the chosen names suggest that both functions are
> > > identical, with the only difference that one takes the lock internally and
> > > the other one requires the caller to take the lock.
> > >
> > > It's probably better to name them after what they do and not what they're
> > > part of. If you prefer the latter, that's fine with me too, but please
> > > choose a name that makes this circumstance obvious.
> >
> > Fair point. Do you have naming suggestions? Otherwise I can name them
> > drm_gpuvm_bo_defer_free_part1() and drm_gpuvm_bo_defer_free_part2().
> > :)
>
> drm_gpuvm_bo_free_deferral_extract_locked() and
> drm_gpuvm_bo_free_deferral_enqueue()? Definitely not short names though.

With those names I have to do some additional line breaks. How about:

drm_gpuvm_bo_into_zombie()
drm_gpuvm_bo_defer_zombie()

leaning on the zombie terminology I already added for the
drm_gpuvm_bo_is_zombie() function.
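
Concretely, the split would read something like this (same bodies as in the
patch above, only renamed; just a sketch, not necessarily the final version):

static void
drm_gpuvm_bo_into_zombie(struct kref *kref)
{
	struct drm_gpuvm_bo *vm_bo = container_of(kref, struct drm_gpuvm_bo,
						  kref);
	struct drm_gpuvm *gpuvm = vm_bo->vm;

	/* Called with the GEM's gpuva lock held. */
	if (!drm_gpuvm_resv_protected(gpuvm)) {
		drm_gpuvm_bo_list_del(vm_bo, extobj, true);
		drm_gpuvm_bo_list_del(vm_bo, evict, true);
	}

	list_del(&vm_bo->list.entry.gem);
}

static void
drm_gpuvm_bo_defer_zombie(struct drm_gpuvm_bo *vm_bo)
{
	struct drm_gpuvm *gpuvm = vm_bo->vm;

	/* GEM mutex must not be held; the vm_bo may be freed right away. */
	llist_add(&vm_bo->list.entry.bo_defer, &gpuvm->bo_defer);
}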

Alice
