On 2018-02-13 12:23 AM, Felix Kuehling wrote:
On 2018-02-12 11:57 AM, Felix Kuehling wrote:
I just thought of a slightly different approach I would consider more
realistic, without having thought through all the details: Adding
KFD-specific memory management ioctls to the amdgpu device. Basically
call amdgpu_amdkfd_gpuvm functions from amdgpu ioctl functions instead
of KFD ioctl functions. But we'd still have KFD ioctls for other things,
and the new amdgpu ioctls would be KFD-specific and not useful for
graphics applications. It's probably still several weeks of work, but
shouldn't have major teething issues because the internal logic and
functionality would be basically unchanged. It would just move the
ioctls from one device to another.
My thinking went in a similar direction. But instead of exposing the
KFD IOCTLs through the DRM FD, I would let the KFD import a DRM FD.

And then use the DRM FD in the KFD for things like the buffer provider
of a device, e.g. no separate IDR for BOs in the KFD but rather a
reference to the DRM FD.
I'm not sure I understand this. With "DRM FD" I think you mean the
device file descriptor. Then have KFD call file operations on the FD to
call AMDGPU ioctls? Is there any precedent for this that I can look at
for an example?
OK, I think this could work for finding a VM from a DRM file descriptor:

     /* fget() takes a reference on the file; drop it with fput() when done */
     struct file *filp = fget(fd);
     struct drm_file *drm_priv = filp->private_data;
     struct amdgpu_fpriv *drv_priv = drm_priv->driver_priv;
     struct amdgpu_vm *vm = &drv_priv->vm;

That would let us use the DRM VM instead of creating our own and would
avoid wasting a perfectly good VM that gets created by opening the DRM
device. We'd need a new ioctl to import VMs into KFD. But that's about
as far as I would take it.
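
Just to illustrate, the import could be a trivial ioctl that takes the
DRM fd. The struct and ioctl names below are placeholders, not existing
UAPI:

     /* Hypothetical sketch of a KFD ioctl that imports the VM behind a
      * DRM file descriptor; names and numbers are placeholders. */
     struct kfd_ioctl_acquire_vm_args {
             __u32 drm_fd;   /* render node fd opened by the process */
             __u32 gpu_id;   /* which GPU's VM to take over */
     };

     #define AMDKFD_IOC_ACQUIRE_VM \
             _IOW('K', 0x21, struct kfd_ioctl_acquire_vm_args)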

We'd still need to add KFD-specific data to the VM. If we use an
existing VM, we can't wrap it in our own structure any more. We'd need
to come up with a different solution or extend struct amdgpu_vm with
what we need.

Well feel free to extend the VM structure with a pointer for KFD data.
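
E.g. roughly like this; the member name is just a placeholder:

     struct amdgpu_vm {
             /* ... existing amdgpu_vm members ... */

             /* Opaque per-process data owned by amdkfd; NULL for VMs
              * that are only used for graphics. */
             void *kfd_data;
     };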

And we still need our own BO creation ioctl. Importing DMABufs adds
extra overhead (more system calls, file descriptors, GEM objects) and
doesn't cover some of our use cases (userptr). We also need our own IDR
for managing buffer handles that are used with all our memory and VM
management functions.
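
For reference, that bookkeeping is basically an IDR mapping small
integer handles to our BO structure, roughly like this (simplified
sketch, names illustrative, not the actual amdkfd code):

     #include <linux/idr.h>

     static DEFINE_IDR(buf_idr);

     /* Hand out a small integer handle that user space passes to all
      * later map/unmap/free ioctls. */
     static int kfd_bo_handle_create(struct kgd_mem *mem)
     {
             return idr_alloc(&buf_idr, mem, 1, 0, GFP_KERNEL);
     }

     static struct kgd_mem *kfd_bo_handle_lookup(int handle)
     {
             return idr_find(&buf_idr, handle);
     }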

Yeah, well that is the second part of that idea.

When you have the drm_file for the VM, you can also use it to call drm_gem_object_lookup().

And when you need to iterate over all the BOs, you can just use the object_idr or the VM structure as well.
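
E.g. something like this, reusing the drm_file from the fget() sketch
above (error handling trimmed):

     struct drm_gem_object *gobj;
     struct amdgpu_bo *bo;

     /* Resolve a GEM handle through the imported drm_file instead of a
      * KFD-private IDR. */
     gobj = drm_gem_object_lookup(drm_priv, handle);
     if (!gobj)
             return -ENOENT;

     bo = gem_to_amdgpu_bo(gobj);
     /* ... validate/map bo ... */
     drm_gem_object_put_unlocked(gobj);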

Regards,
Christian.


Regards,
   Felix


Or do you mean a DMABuf FD exported by GEM? After importing the buffer
and closing the FD, we still need a way to refer to the buffer for
managing the GPU mappings and for releasing the reference. So we still
need handles that are currently provided by our own IDR. I think we'll
still need that.
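
For reference, the in-kernel side of such an import would look roughly
like this (simplified sketch); on top of that, user space needs extra
system calls just to export the buffer as a DMABuf fd in the first
place:

     struct dma_buf *dmabuf;
     struct drm_gem_object *gobj;
     struct amdgpu_bo *bo;

     dmabuf = dma_buf_get(dmabuf_fd);             /* consumes an fd */
     gobj = drm_gem_prime_import(ddev, dmabuf);   /* creates a GEM object */
     bo = gem_to_amdgpu_bo(gobj);
     dma_buf_put(dmabuf);                         /* GEM keeps its own ref */

     /* bo still needs a KFD handle so later ioctls can map/unmap it
      * and finally release the reference. */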

Finally, the kfd_ioctl_alloc_memory_of_gpu will later also be used for
creating userptr BOs. Those can't be exported as DMABufs, so we still
need our own ioctl for creating that type of BO. That was going to be my
next patch series.
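
The idea, roughly, is that user space would pass a CPU pointer plus a
userptr flag through the same allocation ioctl; purely as an
illustration (the flag name and the field carrying the pointer are
placeholders for that series):

     /* Purely illustrative; the flag and the field used to carry the
      * CPU pointer are placeholders for the upcoming series. */
     struct kfd_ioctl_alloc_memory_of_gpu_args args = {0};

     args.va_addr = gpu_va;                   /* GPU virtual address */
     args.size = buf_size;
     args.gpu_id = gpu_id;
     args.mmap_offset = (uint64_t)user_ptr;   /* CPU pointer to wrap */
     args.flags = KFD_IOC_ALLOC_MEM_FLAGS_USERPTR;

     ioctl(kfd_fd, AMDKFD_IOC_ALLOC_MEMORY_OF_GPU, &args);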

Regards,
   Felix

