Rob Herring <r...@kernel.org> writes:
> On Thu, Mar 14, 2019 at 3:45 PM Dave Airlie <airl...@gmail.com> wrote:
>>
>> On Thu, 14 Feb 2019 at 19:12, Christian König via dri-devel
>> <dri-devel@lists.freedesktop.org> wrote:
>> >
>> > On 14.02.19 at 03:52, Alex Deucher via dri-devel wrote:
>> > > [SNIP]
>> > >>>>> +static int lima_ioctl_gem_va(struct drm_device *dev, void *data,
>> > >>>>> +			     struct drm_file *file)
>> > >>>>> +{
>> > >>>>> +	struct drm_lima_gem_va *args = data;
>> > >>>>> +
>> > >>>>> +	switch (args->op) {
>> > >>>>> +	case LIMA_VA_OP_MAP:
>> > >>>>> +		return lima_gem_va_map(file, args->handle,
>> > >>>>> +				       args->flags, args->va);
>> > >>>>> +	case LIMA_VA_OP_UNMAP:
>> > >>>>> +		return lima_gem_va_unmap(file, args->handle,
>> > >>>>> +					 args->va);
>> > >>>> These are mapping to GPU VA. Why not do that on GEM object creation or
>> > >>>> import, or when the objects are submitted with the cmd queue, as other
>> > >>>> drivers do?
>> > >>>>
>> > >>>> To put it another way, these ioctls look different from what other
>> > >>>> drivers do. Why do you need to do things differently? My understanding
>> > >>>> is that best practice is to map and return the GPU offset when the GEM
>> > >>>> object is created. This is what v3d does. I think Intel is moving to
>> > >>>> that. And panfrost will do that.
>> > >>> I think it would be a good idea to look at the amdgpu driver. This
>> > >>> driver is heavily modeled after it. Basically the GEM VA ioctl allows
>> > >>> userspace to manage per-process (per-fd, really) virtual addresses.
>> > >> Why do you want userspace to manage assigning VAs versus the kernel
>> > >> doing so? Exposing that detail to userspace means the driver must support
>> > >> a per-process address space. Letting the kernel assign addresses means
>> > >> it can either be a single address space or a per-process address
>> > >> space. It seems to me more flexible to allow the kernel driver to
>> > >> evolve without that ABI.
>> > > Having it in userspace provides a lot more flexibility and makes it
>> > > easier to support things like a unified address space between CPU and
>> > > GPU. I guess it depends on the hw as to what is the right choice.
>> >
>> > To summarize: we actually tried this approach with the radeon driver
>> > and it turned out to be a really bad mistake.
>> >
>> > To implement features like partially resident textures and a shared
>> > virtual address space, you absolutely need userspace to be in charge
>> > of allocating virtual addresses.
>> >
>>
>> I think for lima not having this is fine, but panfrost really should
>> have it.
>>
>> If you can implement Vulkan you probably want this; nouveau doesn't
>> have a Vulkan driver because of exactly this problem in their uAPI, so
>> maybe adjust panfrost to do userspace-managed VMAs.
>
> Wouldn't this just require an allocation flag to not map the BO up
> front, and then new ioctls like the above to map and unmap at specified
> VAs? Seems like we could add that when we get there.
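(Side note on what "userspace in charge of allocating virtual addresses" means in practice: with a GEM_VA-style ioctl the VA allocator lives in the userspace driver, which can then place buffers wherever features like SVM or sparse binding require. A toy bump-allocator sketch — all names here are hypothetical, real UMDs use range trees with hole reuse:)

```c
#include <stdint.h>

/* Toy userspace GPU-VA allocator sketch; names are hypothetical and this
 * is not any driver's real allocator. A real UMD tracks free ranges and
 * supports freeing, but the principle is the same: the address choice is
 * made in userspace and only communicated to the kernel via a VA ioctl. */
struct va_allocator {
	uint64_t next;	/* next unallocated address (bump pointer) */
	uint64_t end;	/* exclusive end of the GPU VA range */
};

static uint64_t va_alloc(struct va_allocator *a, uint64_t size, uint64_t align)
{
	/* round up to the requested power-of-two alignment */
	uint64_t va = (a->next + align - 1) & ~(align - 1);

	if (va >= a->end || size > a->end - va)
		return 0;	/* out of VA space; 0 kept as "invalid" */

	a->next = va + size;
	return va;
}
```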
Sounds pretty reasonable to me.
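Not speaking for what lima or panfrost would actually merge, but the shape Rob describes could look roughly like the quoted lima dispatch: a create-time flag that skips the up-front kernel mapping, plus explicit map/unmap ops at a userspace-chosen VA. A compilable sketch with stubbed backends (every name and flag below is an assumption for illustration, not real uAPI):

```c
#include <stdint.h>

/* Hypothetical uAPI sketch -- not the real lima/panfrost interface. */
#define BO_CREATE_NO_MAP	(1u << 0)	/* assumed flag: don't map at creation */

#define VA_OP_MAP		1
#define VA_OP_UNMAP		2

struct gem_va_args {		/* modeled loosely on the quoted drm_lima_gem_va */
	uint32_t handle;	/* GEM handle */
	uint32_t op;		/* VA_OP_MAP or VA_OP_UNMAP */
	uint32_t flags;
	uint64_t va;		/* userspace-chosen GPU virtual address */
};

/* Stubs standing in for the driver's real map/unmap paths. */
static int gem_va_map(uint32_t handle, uint32_t flags, uint64_t va)
{
	(void)handle; (void)flags;
	return va == 0 ? -1 : 0;	/* pretend VA 0 is reserved */
}

static int gem_va_unmap(uint32_t handle, uint64_t va)
{
	(void)handle; (void)va;
	return 0;
}

/* Dispatch mirroring the switch in the quoted lima_ioctl_gem_va(). */
static int ioctl_gem_va(struct gem_va_args *args)
{
	switch (args->op) {
	case VA_OP_MAP:
		return gem_va_map(args->handle, args->flags, args->va);
	case VA_OP_UNMAP:
		return gem_va_unmap(args->handle, args->va);
	default:
		return -1;	/* would be -EINVAL in a real driver */
	}
}
```

With something like this, a kernel-managed driver could keep its existing path as the default and only take the userspace-managed route when the assumed BO_CREATE_NO_MAP flag is set, which is what makes "add it when we get there" workable.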
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel