Re: [Nouveau] [PATCH drm-misc-next v3 6/7] drm/gpuvm: generalize dma_resv/extobj handling and GEM validation

2023-09-20 Thread Christian König

Am 20.09.23 um 16:02 schrieb Thomas Hellström:

[SNIP]
Do you by "relocation" list refer to what gpuvm calls "evict" list 
or something else? Like the relocation/validation list that used to
be sent from user-space for non-VM_BIND vms?


The BOs sent into the kernel with each command submission on the
classic IOCTLs.




The vm bos plus the external/shared bos bound to the VM (the 
external list) are the bos being referenced by the current batch. So 
the bos on the VM's external list are the ones being locked and 
fenced and checked for eviction. If they weren't they could be 
evicted before the current batch completes?


That only applies to a certain use case, e.g. Vulkan or user mode 
queues.


Multimedia APIs and especially OpenGL work differently; here only the
BOs mentioned in the relocation list are guaranteed to not be evicted.


This is intentional because those APIs tend to over-allocate memory
all the time, so for good performance you need to be able to evict 
BOs from the VM while other parts of the VM are currently in use.


Without that, especially OpenGL performance would be completely
crippled, at least on amdgpu.


OK, I've always wondered how overcommitting a local VM would be handled
on VM_BIND, where we don't have the relocation list, at least not in 
xe, so we have what you refer to as the user mode queues.


I figure those APIs that suffer from overcommitting would maintain a 
"current working set" in user-space and send changes as deltas to the 
kernel as unbinds/binds. Or at least "can be unbound / can no longer 
be unbound" advice.
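
Very roughly, and purely as a user-space sketch (vm_bind()/vm_unbind()
and the working-set bookkeeping below are made-up placeholders, not an
actual uAPI):

#include <stdbool.h>
#include <stddef.h>

struct ws_entry { int handle; unsigned long va, size; };

/* Hypothetical wrappers around the driver's VM_BIND/VM_UNBIND ioctls. */
void vm_bind(int handle, unsigned long va, unsigned long size);
void vm_unbind(unsigned long va, unsigned long size);

static bool in_set(const struct ws_entry *set, size_t n, int handle)
{
	for (size_t i = 0; i < n; i++)
		if (set[i].handle == handle)
			return true;
	return false;
}

/* Send only the difference between the previous and the new working set. */
static void sync_working_set(const struct ws_entry *prev, size_t n_prev,
			     const struct ws_entry *next, size_t n_next)
{
	for (size_t i = 0; i < n_prev; i++)	/* dropped from the set -> unbind */
		if (!in_set(next, n_next, prev[i].handle))
			vm_unbind(prev[i].va, prev[i].size);

	for (size_t i = 0; i < n_next; i++)	/* newly referenced -> bind */
		if (!in_set(prev, n_prev, next[i].handle))
			vm_bind(next[i].handle, next[i].va, next[i].size);
}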


This may turn out interesting.


Essentially this is how Windows used to work till (I think) Windows 8.

Basically the kernel is responsible for figuring out which BOs to move
in/out of VRAM for each submission an application makes. And it is
perfectly acceptable for an application to allocate 8GiB of VRAM when
only 4GiB is physically available.


To be honest I think it's one of the worst things ever invented, but we
somehow have to support it for some use cases.


Christian.



/Thomas




Re: [Nouveau] [PATCH drm-misc-next v3 6/7] drm/gpuvm: generalize dma_resv/extobj handling and GEM validation

2023-09-20 Thread Thomas Hellström

Hi

On 9/20/23 15:48, Christian König wrote:

Am 20.09.23 um 15:38 schrieb Thomas Hellström:


On 9/20/23 15:06, Christian König wrote:



Am 20.09.23 um 14:06 schrieb Thomas Hellström:


On 9/20/23 12:51, Christian König wrote:

Am 20.09.23 um 09:44 schrieb Thomas Hellström:

Hi,

On 9/20/23 07:37, Christian König wrote:

Am 19.09.23 um 17:23 schrieb Thomas Hellström:


On 9/19/23 17:16, Danilo Krummrich wrote:

On 9/19/23 14:21, Thomas Hellström wrote:

Hi Christian

On 9/19/23 14:07, Christian König wrote:

Am 13.09.23 um 17:46 schrieb Danilo Krummrich:

On 9/13/23 17:33, Christian König wrote:

Am 13.09.23 um 17:15 schrieb Danilo Krummrich:

On 9/13/23 16:26, Christian König wrote:

Am 13.09.23 um 14:16 schrieb Danilo Krummrich:
As mentioned in a different mail thread, the reply is 
based on the assumption
that we don't support anything else than GPUVM updates 
from the IOCTL.


I think that this assumption is incorrect.


Well, more precisely I should have said "don't support GPUVM
updates from within fence signaling critical sections". And
looking at the code, that doesn't seem to be what you're doing
there.



Vulkan is just one specific use case, but this here
should probably be able to handle other use cases as well.


Especially with HMM you get the requirement that you 
need to be able to invalidate GPUVM mappings without 
grabbing a reservation lock.


What do you mean with "invalidate GPUVM mappings" in this 
context? drm_gpuvm_bo_evict()
should only be called from a ttm_device_funcs::move 
callback, we should hold the dma-resv

lock there.


Well the question is which dma-resv lock do we hold?

In the move callback we only hold the dma-resv lock of the 
BO which is moved, but when that is a shared BO then 
that's not the same as the one for the VM.


Correct, Thomas' idea was to use the GEM's dma_resv lock to 
protect drm_gpuvm_bo::evicted
and then actually move the drm_gpuvm_bo to the VM's evicted 
list once we grabbed all
dma-resv locks when locking the VM's BOs using drm_exec. We 
can remove them from the evicted
list on validate(). This way we never touch the evicted 
list without holding at least the VM's

dma-resv lock.
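
For reference, a minimal sketch of that flow (struct layout and helper
names below are made up for illustration, they are not the actual
drm_gpuvm API from this series):

#include <linux/dma-resv.h>
#include <linux/list.h>
#include <drm/drm_gem.h>

struct example_vm {
	struct dma_resv *resv;       /* the VM's common dma-resv */
	struct list_head evicted;    /* protected by vm->resv */
};

struct example_vm_bo {
	struct example_vm *vm;
	struct drm_gem_object *obj;
	struct list_head evict_entry;
	bool evicted;                /* protected by obj->resv */
};

/* Move callback: only the moved BO's dma-resv is guaranteed to be
 * held, so only the per-BO flag is touched here.
 */
static void example_vm_bo_evict(struct example_vm_bo *vm_bo)
{
	dma_resv_assert_held(vm_bo->obj->resv);
	vm_bo->evicted = true;
}

/* Exec path: once drm_exec has locked the VM's dma-resv and all of
 * the VM's BOs, flagged BOs are moved onto the VM's evicted list.
 */
static void example_vm_bo_collect(struct example_vm_bo *vm_bo)
{
	dma_resv_assert_held(vm_bo->vm->resv);
	dma_resv_assert_held(vm_bo->obj->resv);

	if (vm_bo->evicted)
		list_move_tail(&vm_bo->evict_entry, &vm_bo->vm->evicted);
}

/* validate(): walk the evicted list with the same locks held,
 * revalidate each BO and drop it from the list again.
 */
static void example_vm_validate(struct example_vm *vm)
{
	struct example_vm_bo *vm_bo, *next;

	dma_resv_assert_held(vm->resv);
	list_for_each_entry_safe(vm_bo, next, &vm->evicted, evict_entry) {
		/* driver-specific revalidation of vm_bo->obj goes here,
		 * e.g. ttm_bo_validate() plus page table updates
		 */
		vm_bo->evicted = false;
		list_del_init(&vm_bo->evict_entry);
	}
}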

Do you have any concerns about that?


Scratching my head a bit how that is supposed to work.

This implies that you go over all the evicted BOs during 
validation and not just the one mentioned in the CS.


That might work for Vulkan, but is pretty much a no-go for 
OpenGL.










See what the eviction lock in amdgpu is doing for example.


The eviction_lock seems to protect a VM state "evicting" 
of whether any BO that
is associated with the VM is currently evicting. At the 
same time amdgpu protects
the evicted list of the VM with a different lock. So
this seems to be entirely
unrelated. Tracking a "currently evicting" state is not 
part of the GPUVM
implementation currently and hence nothing would change 
for amdgpu there.


Sorry for the confusion we use different terminology in 
amdgpu.


The eviction lock and evicted state is for the VM page 
tables, e.g. if the whole VM is currently not used and 
swapped out or even de-allocated.


This is necessary because we have cases where we need to 
access the VM data without holding the dma-resv lock of 
this VM. Especially figuring out which parts of an address 
space contain mappings and which don't.


I think this is fine, this has nothing to do with lists of 
evicted GEM objects or external GEM
objects, right? Marking mappings (drm_gpuva) as invalidated 
(DRM_GPUVA_INVALIDATED) or accessing

the VA space does not require any dma-resv locks.


I hope so, but I'm not 100% sure.





This is a requirement which comes with HMM handling, you 
won't see this with Vulkan (or OpenGL, VAAPI etc..).



The invalidation lock on the other hand is what in this 
discussion is called eviction lock. This one is needed 
because what I wrote above, during the move callback only 
the dma-resv of the BO which is moved is locked, but not 
necessarily the dma-resv of the VM.


That's yet another thing, right? This is used to track 
whether *any* BO that belongs to the VM is
currently being evicted, correct? As mentioned, as of now this
is not supported in GPUVM and hence would be the same driver
specific code with the same driver specific lock.


That is most likely a show stopper using this for OpenGL 
based workloads as far as I can see. For those you need to
be able to figure out which non-VM BOs have been evicted and
which parts of the VM need updates.


We identify those with a bool in the gpuvm_bo, and that bool 
is protected by the bo_resv. In essence, the "evicted" list 
must be made up-to-date with all relevant locks held before 
traversing in the next exec.


What I still miss with this idea is how do we find all the 
drm_gpuvm_bo structures with the evicted bool set to true? 
When doing the drm_exec dance we come across all external ones 
and can add them to the list if needed, but what about the BOs 
having the VM's 

Re: [Nouveau] [PATCH drm-misc-next v3 6/7] drm/gpuvm: generalize dma_resv/extobj handling and GEM validation

2023-09-20 Thread Christian König

Am 20.09.23 um 15:38 schrieb Thomas Hellström:


On 9/20/23 15:06, Christian König wrote:



Am 20.09.23 um 14:06 schrieb Thomas Hellström:


On 9/20/23 12:51, Christian König wrote:

Am 20.09.23 um 09:44 schrieb Thomas Hellström:

Hi,

On 9/20/23 07:37, Christian König wrote:

Am 19.09.23 um 17:23 schrieb Thomas Hellström:


On 9/19/23 17:16, Danilo Krummrich wrote:

On 9/19/23 14:21, Thomas Hellström wrote:

Hi Christian

On 9/19/23 14:07, Christian König wrote:

Am 13.09.23 um 17:46 schrieb Danilo Krummrich:

On 9/13/23 17:33, Christian König wrote:

Am 13.09.23 um 17:15 schrieb Danilo Krummrich:

On 9/13/23 16:26, Christian König wrote:

Am 13.09.23 um 14:16 schrieb Danilo Krummrich:
As mentioned in a different mail thread, the reply is 
based on the assumption
that we don't support anything else than GPUVM updates 
from the IOCTL.


I think that this assumption is incorrect.


Well, more precisely I should have said "don't support GPUVM
updates from within fence signaling critical sections". And
looking at the code, that doesn't seem to be what you're doing
there.



Vulkan is just one specific use case, but this here
should probably be able to handle other use cases as well.


Especially with HMM you get the requirement that you need 
to be able to invalidate GPUVM mappings without grabbing 
a reservation lock.


What do you mean with "invalidate GPUVM mappings" in this 
context? drm_gpuvm_bo_evict()
should only be called from a ttm_device_funcs::move 
callback, we should hold the dma-resv

lock there.


Well the question is which dma-resv lock do we hold?

In the move callback we only hold the dma-resv lock of the 
BO which is moved, but when that is a shared BO then that's 
not the same as the one for the VM.


Correct, Thomas' idea was to use the GEM's dma_resv lock to 
protect drm_gpuvm_bo::evicted
and then actually move the drm_gpuvm_bo to the VM's evicted 
list once we grabbed all
dma-resv locks when locking the VM's BOs using drm_exec. We 
can remove them from the evicted
list on validate(). This way we never touch the evicted list 
without holding at least the VM's

dma-resv lock.

Do you have any concerns about that?


Scratching my head a bit how that is supposed to work.

This implies that you go over all the evicted BOs during 
validation and not just the one mentioned in the CS.


That might work for Vulkan, but is pretty much a no-go for 
OpenGL.










See what the eviction lock in amdgpu is doing for example.


The eviction_lock seems to protect a VM state "evicting" 
of whether any BO that
is associated with the VM is currently evicting. At the 
same time amdgpu protects
the evicted list of the VM with a different lock. So this
seems to be entirely
unrelated. Tracking a "currently evicting" state is not 
part of the GPUVM
implementation currently and hence nothing would change 
for amdgpu there.


Sorry for the confusion we use different terminology in 
amdgpu.


The eviction lock and evicted state is for the VM page 
tables, e.g. if the whole VM is currently not used and 
swapped out or even de-allocated.


This is necessary because we have cases where we need to 
access the VM data without holding the dma-resv lock of 
this VM. Especially figuring out which parts of an address 
space contain mappings and which don't.


I think this is fine, this has nothing to do with lists of 
evicted GEM objects or external GEM
objects, right? Marking mappings (drm_gpuva) as invalidated 
(DRM_GPUVA_INVALIDATED) or accessing

the VA space does not require any dma-resv locks.


I hope so, but I'm not 100% sure.





This is a requirement which comes with HMM handling, you 
won't see this with Vulkan (or OpenGL, VAAPI etc..).



The invalidation lock on the other hand is what in this 
discussion is called eviction lock. This one is needed 
because what I wrote above, during the move callback only 
the dma-resv of the BO which is moved is locked, but not 
necessarily the dma-resv of the VM.


That's yet another thing, right? This is used to track 
whether *any* BO that belongs to the VM is
currently being evicted, correct? As mentioned, as of now this
is not supported in GPUVM and hence would be the same driver
specific code with the same driver specific lock.


That is most likely a show stopper using this for OpenGL 
based workloads as far as I can see. For those you need to
be able to figure out which non-VM BOs have been evicted and
which parts of the VM need updates.


We identify those with a bool in the gpuvm_bo, and that bool 
is protected by the bo_resv. In essence, the "evicted" list 
must be made up-to-date with all relevant locks held before 
traversing in the next exec.


What I still miss with this idea is how do we find all the 
drm_gpuvm_bo structures with the evicted bool set to true? When 
doing the drm_exec dance we come across all external ones and 
can add them to the list if needed, but what about the BOs 
having the VM's dma-resv?


Oh, they can be added to the evict 

Re: [Nouveau] [PATCH drm-misc-next v3 6/7] drm/gpuvm: generalize dma_resv/extobj handling and GEM validation

2023-09-20 Thread Thomas Hellström



On 9/20/23 15:06, Christian König wrote:



Am 20.09.23 um 14:06 schrieb Thomas Hellström:


On 9/20/23 12:51, Christian König wrote:

Am 20.09.23 um 09:44 schrieb Thomas Hellström:

Hi,

On 9/20/23 07:37, Christian König wrote:

Am 19.09.23 um 17:23 schrieb Thomas Hellström:


On 9/19/23 17:16, Danilo Krummrich wrote:

On 9/19/23 14:21, Thomas Hellström wrote:

Hi Christian

On 9/19/23 14:07, Christian König wrote:

Am 13.09.23 um 17:46 schrieb Danilo Krummrich:

On 9/13/23 17:33, Christian König wrote:

Am 13.09.23 um 17:15 schrieb Danilo Krummrich:

On 9/13/23 16:26, Christian König wrote:

Am 13.09.23 um 14:16 schrieb Danilo Krummrich:
As mentioned in a different mail thread, the reply is 
based on the assumption
that we don't support anything else than GPUVM updates 
from the IOCTL.


I think that this assumption is incorrect.


Well, more precisely I should have said "don't support GPUVM
updates from within fence signaling critical sections". And
looking at the code, that doesn't seem to be what you're doing
there.



Vulkan is just one specific use case, but this here
should probably be able to handle other use cases as well.


Especially with HMM you get the requirement that you need 
to be able to invalidate GPUVM mappings without grabbing a 
reservation lock.


What do you mean with "invalidate GPUVM mappings" in this 
context? drm_gpuvm_bo_evict()
should only be called from a ttm_device_funcs::move 
callback, we should hold the dma-resv

lock there.


Well the question is which dma-resv lock do we hold?

In the move callback we only hold the dma-resv lock of the 
BO which is moved, but when that is a shared BO then that's 
not the same as the one for the VM.


Correct, Thomas' idea was to use the GEM's dma_resv lock to 
protect drm_gpuvm_bo::evicted
and then actually move the drm_gpuvm_bo to the VM's evicted 
list once we grabbed all
dma-resv locks when locking the VM's BOs using drm_exec. We 
can remove them from the evicted
list on validate(). This way we never touch the evicted list 
without holding at least the VM's

dma-resv lock.

Do you have any concerns about that?


Scratching my head a bit how that is supposed to work.

This implies that you go over all the evicted BOs during 
validation and not just the one mentioned in the CS.


That might work for Vulkan, but is pretty much a no-go for 
OpenGL.










See what the eviction lock in amdgpu is doing for example.


The eviction_lock seems to protect a VM state "evicting" of 
whether any BO that
is associated with the VM is currently evicting. At the 
same time amdgpu protects
the evicted list of the VM with a different lock. So this
seems to be entirely
unrelated. Tracking a "currently evicting" state is not 
part of the GPUVM
implementation currently and hence nothing would change for 
amdgpu there.


Sorry for the confusion we use different terminology in amdgpu.

The eviction lock and evicted state is for the VM page 
tables, e.g. if the whole VM is currently not used and 
swapped out or even de-allocated.


This is necessary because we have cases where we need to 
access the VM data without holding the dma-resv lock of this 
VM. Especially figuring out which parts of an address space 
contain mappings and which don't.


I think this is fine, this has nothing to do with lists of 
evicted GEM objects or external GEM
objects, right? Marking mappings (drm_gpuva) as invalidated 
(DRM_GPUVA_INVALIDATED) or accessing

the VA space does not require any dma-resv locks.


I hope so, but I'm not 100% sure.





This is a requirement which comes with HMM handling, you 
won't see this with Vulkan (or OpenGL, VAAPI etc..).



The invalidation lock on the other hand is what in this 
discussion is called eviction lock. This one is needed 
because what I wrote above, during the move callback only 
the dma-resv of the BO which is moved is locked, but not 
necessarily the dma-resv of the VM.


That's yet another thing, right? This is used to track 
whether *any* BO that belongs to the VM is
currently being evicted, correct? As mentioned, as of now this
is not supported in GPUVM and hence would be the same driver
specific code with the same driver specific lock.


That is most likely a show stopper using this for OpenGL based 
workloads as far as I can see. For those you need to be able
to figure out which non-VM BOs have been evicted and which
parts of the VM need updates.


We identify those with a bool in the gpuvm_bo, and that bool is 
protected by the bo_resv. In essence, the "evicted" list must 
be made up-to-date with all relevant locks held before 
traversing in the next exec.


What I still miss with this idea is how do we find all the 
drm_gpuvm_bo structures with the evicted bool set to true? When 
doing the drm_exec dance we come across all external ones and 
can add them to the list if needed, but what about the BOs 
having the VM's dma-resv?


Oh, they can be added to the evict list directly (no bool needed) 
in the eviction 

Re: [Nouveau] [PATCH drm-misc-next v3 6/7] drm/gpuvm: generalize dma_resv/extobj handling and GEM validation

2023-09-20 Thread Christian König




Am 20.09.23 um 14:06 schrieb Thomas Hellström:


On 9/20/23 12:51, Christian König wrote:

Am 20.09.23 um 09:44 schrieb Thomas Hellström:

Hi,

On 9/20/23 07:37, Christian König wrote:

Am 19.09.23 um 17:23 schrieb Thomas Hellström:


On 9/19/23 17:16, Danilo Krummrich wrote:

On 9/19/23 14:21, Thomas Hellström wrote:

Hi Christian

On 9/19/23 14:07, Christian König wrote:

Am 13.09.23 um 17:46 schrieb Danilo Krummrich:

On 9/13/23 17:33, Christian König wrote:

Am 13.09.23 um 17:15 schrieb Danilo Krummrich:

On 9/13/23 16:26, Christian König wrote:

Am 13.09.23 um 14:16 schrieb Danilo Krummrich:
As mentioned in a different mail thread, the reply is 
based on the assumption
that we don't support anything else than GPUVM updates 
from the IOCTL.


I think that this assumption is incorrect.


Well, more precisely I should have said "don't support GPUVM
updates from within fence signaling critical sections". And
looking at the code, that doesn't seem to be what you're doing
there.



Vulkan is just one specific use case, but this here should
probably be able to handle other use cases as well.


Especially with HMM you get the requirement that you need 
to be able to invalidate GPUVM mappings without grabbing a 
reservation lock.


What do you mean with "invalidate GPUVM mappings" in this 
context? drm_gpuvm_bo_evict()
should only be called from a ttm_device_funcs::move 
callback, we should hold the dma-resv

lock there.


Well the question is which dma-resv lock do we hold?

In the move callback we only hold the dma-resv lock of the BO 
which is moved, but when that is a shared BO then that's not 
the same as the one for the VM.


Correct, Thomas' idea was to use the GEM's dma_resv lock to 
protect drm_gpuvm_bo::evicted
and then actually move the drm_gpuvm_bo to the VM's evicted 
list once we grabbed all
dma-resv locks when locking the VM's BOs using drm_exec. We 
can remove them from the evicted
list on validate(). This way we never touch the evicted list 
without holding at least the VM's

dma-resv lock.

Do you have any concerns about that?


Scratching my head a bit how that is supposed to work.

This implies that you go over all the evicted BOs during 
validation and not just the one mentioned in the CS.


That might work for Vulkan, but is pretty much a no-go for OpenGL.









See what the eviction lock in amdgpu is doing for example.


The eviction_lock seems to protect a VM state "evicting" of 
whether any BO that
is associated with the VM is currently evicting. At the same 
time amdgpu protects
the evicted list of the VM with a different lock. So this
seems to be entirely
unrelated. Tracking a "currently evicting" state is not part 
of the GPUVM
implementation currently and hence nothing would change for 
amdgpu there.


Sorry for the confusion we use different terminology in amdgpu.

The eviction lock and evicted state is for the VM page 
tables, e.g. if the whole VM is currently not used and 
swapped out or even de-allocated.


This is necessary because we have cases where we need to 
access the VM data without holding the dma-resv lock of this 
VM. Especially figuring out which parts of an address space 
contain mappings and which don't.


I think this is fine, this has nothing to do with lists of 
evicted GEM objects or external GEM
objects, right? Marking mappings (drm_gpuva) as invalidated 
(DRM_GPUVA_INVALIDATED) or accessing

the VA space does not require any dma-resv locks.


I hope so, but I'm not 100% sure.





This is a requirement which comes with HMM handling, you 
won't see this with Vulkan (or OpenGL, VAAPI etc..).



The invalidation lock on the other hand is what in this 
discussion is called eviction lock. This one is needed 
because what I wrote above, during the move callback only the 
dma-resv of the BO which is moved is locked, but not 
necessarily the dma-resv of the VM.


That's yet another thing, right? This is used to track whether 
*any* BO that belongs to the VM is
currently being evicted, correct? As mentioned, as of now this
is not supported in GPUVM and hence would be the same driver
specific code with the same driver specific lock.


That is most likely a show stopper using this for OpenGL based 
workloads as far as I can see. For those you need to be able
to figure out which non-VM BOs have been evicted and which
parts of the VM need updates.


We identify those with a bool in the gpuvm_bo, and that bool is 
protected by the bo_resv. In essence, the "evicted" list must be 
made up-to-date with all relevant locks held before traversing 
in the next exec.


What I still miss with this idea is how do we find all the 
drm_gpuvm_bo structures with the evicted bool set to true? When 
doing the drm_exec dance we come across all external ones and can 
add them to the list if needed, but what about the BOs having the 
VM's dma-resv?


Oh, they can be added to the evict list directly (no bool needed) 
in the eviction code, like in v3. Since for those we indeed 

Re: [Nouveau] [PATCH drm-misc-next v3 6/7] drm/gpuvm: generalize dma_resv/extobj handling and GEM validation

2023-09-20 Thread Thomas Hellström



On 9/20/23 12:51, Christian König wrote:

Am 20.09.23 um 09:44 schrieb Thomas Hellström:

Hi,

On 9/20/23 07:37, Christian König wrote:

Am 19.09.23 um 17:23 schrieb Thomas Hellström:


On 9/19/23 17:16, Danilo Krummrich wrote:

On 9/19/23 14:21, Thomas Hellström wrote:

Hi Christian

On 9/19/23 14:07, Christian König wrote:

Am 13.09.23 um 17:46 schrieb Danilo Krummrich:

On 9/13/23 17:33, Christian König wrote:

Am 13.09.23 um 17:15 schrieb Danilo Krummrich:

On 9/13/23 16:26, Christian König wrote:

Am 13.09.23 um 14:16 schrieb Danilo Krummrich:
As mentioned in a different mail thread, the reply is based 
on the assumption
that we don't support anything else than GPUVM updates from 
the IOCTL.


I think that this assumption is incorrect.


Well, more precisely I should have said "don't support GPUVM
updates from within fence signaling critical sections". And
looking at the code, that doesn't seem to be what you're doing
there.



Vulkan is just one specific use case, but this here should
probably be able to handle other use cases as well.


Especially with HMM you get the requirement that you need to 
be able to invalidate GPUVM mappings without grabbing a 
reservation lock.


What do you mean with "invalidate GPUVM mappings" in this 
context? drm_gpuvm_bo_evict()
should only be called from a ttm_device_funcs::move 
callback, we should hold the dma-resv

lock there.


Well the question is which dma-resv lock do we hold?

In the move callback we only hold the dma-resv lock of the BO 
which is moved, but when that is a shared BO then that's not 
the same as the one for the VM.


Correct, Thomas' idea was to use the GEM's dma_resv lock to 
protect drm_gpuvm_bo::evicted
and then actually move the drm_gpuvm_bo to the VM's evicted 
list once we grabbed all
dma-resv locks when locking the VM's BOs using drm_exec. We can 
remove them from the evicted
list on validate(). This way we never touch the evicted list 
without holding at least the VM's

dma-resv lock.

Do you have any concerns about that?


Scratching my head a bit how that is supposed to work.

This implies that you go over all the evicted BOs during 
validation and not just the one mentioned in the CS.


That might work for Vulkan, but is pretty much a no-go for OpenGL.









See what the eviction lock in amdgpu is doing for example.


The eviction_lock seems to protect a VM state "evicting" of 
whether any BO that
is associated with the VM is currently evicting. At the same 
time amdgpu protects
the evicted list of the VM with a different lock. So this
seems to be entirely
unrelated. Tracking a "currently evicting" state is not part 
of the GPUVM
implementation currently and hence nothing would change for 
amdgpu there.


Sorry for the confusion we use different terminology in amdgpu.

The eviction lock and evicted state is for the VM page tables, 
e.g. if the whole VM is currently not used and swapped out or 
even de-allocated.


This is necessary because we have cases where we need to 
access the VM data without holding the dma-resv lock of this 
VM. Especially figuring out which parts of an address space 
contain mappings and which don't.


I think this is fine, this has nothing to do with lists of 
evicted GEM objects or external GEM
objects, right? Marking mappings (drm_gpuva) as invalidated 
(DRM_GPUVA_INVALIDATED) or accessing

the VA space does not require any dma-resv locks.


I hope so, but I'm not 100% sure.





This is a requirement which comes with HMM handling, you won't 
see this with Vulkan (or OpenGL, VAAPI etc..).



The invalidation lock on the other hand is what in this 
discussion is called eviction lock. This one is needed because 
what I wrote above, during the move callback only the dma-resv 
of the BO which is moved is locked, but not necessarily the 
dma-resv of the VM.


That's yet another thing, right? This is used to track whether 
*any* BO that belongs to the VM is
currently being evicted, correct? As mentioned, as of now this
is not supported in GPUVM and hence would be the same driver
specific code with the same driver specific lock.


That is most likely a show stopper using this for OpenGL based 
workloads as far as I can see. For those you need to be able
to figure out which non-VM BOs have been evicted and which
parts of the VM need updates.


We identify those with a bool in the gpuvm_bo, and that bool is 
protected by the bo_resv. In essence, the "evicted" list must be 
made up-to-date with all relevant locks held before traversing in 
the next exec.


What I still miss with this idea is how do we find all the 
drm_gpuvm_bo structures with the evicted bool set to true? When 
doing the drm_exec dance we come across all external ones and can 
add them to the list if needed, but what about the BOs having the 
VM's dma-resv?


Oh, they can be added to the evict list directly (no bool needed) 
in the eviction code, like in v3. Since for those we indeed hold 
the VM's dma_resv since it's aliased with the 

Re: [Nouveau] [PATCH drm-misc-next v3 6/7] drm/gpuvm: generalize dma_resv/extobj handling and GEM validation

2023-09-20 Thread Christian König

Am 20.09.23 um 09:44 schrieb Thomas Hellström:

Hi,

On 9/20/23 07:37, Christian König wrote:

Am 19.09.23 um 17:23 schrieb Thomas Hellström:


On 9/19/23 17:16, Danilo Krummrich wrote:

On 9/19/23 14:21, Thomas Hellström wrote:

Hi Christian

On 9/19/23 14:07, Christian König wrote:

Am 13.09.23 um 17:46 schrieb Danilo Krummrich:

On 9/13/23 17:33, Christian König wrote:

Am 13.09.23 um 17:15 schrieb Danilo Krummrich:

On 9/13/23 16:26, Christian König wrote:

Am 13.09.23 um 14:16 schrieb Danilo Krummrich:
As mentioned in a different mail thread, the reply is based 
on the assumption
that we don't support anything else than GPUVM updates from 
the IOCTL.


I think that this assumption is incorrect.


Well, more precisely I should have said "don't support GPUVM
updates from within fence signaling critical sections". And
looking at the code, that doesn't seem to be what you're doing
there.



Vulkan is just one specific use case, but this here should
probably be able to handle other use cases as well.


Especially with HMM you get the requirement that you need to 
be able to invalidate GPUVM mappings without grabbing a 
reservation lock.


What do you mean with "invalidate GPUVM mappings" in this 
context? drm_gpuvm_bo_evict()
should only be called from a ttm_device_funcs::move callback, 
we should hold the dma-resv

lock there.


Well the question is which dma-resv lock do we hold?

In the move callback we only hold the dma-resv lock of the BO 
which is moved, but when that is a shared BO then that's not 
the same as the one for the VM.


Correct, Thomas' idea was to use the GEM's dma_resv lock to 
protect drm_gpuvm_bo::evicted
and then actually move the drm_gpuvm_bo to the VM's evicted list 
once we grabbed all
dma-resv locks when locking the VM's BOs using drm_exec. We can 
remove them from the evicted
list on validate(). This way we never touch the evicted list 
without holding at least the VM's

dma-resv lock.

Do you have any concerns about that?


Scratching my head a bit how that is supposed to work.

This implies that you go over all the evicted BOs during 
validation and not just the one mentioned in the CS.


That might work for Vulkan, but is pretty much a no-go for OpenGL.









See what the eviction lock in amdgpu is doing for example.


The eviction_lock seems to protect a VM state "evicting" of 
whether any BO that
is associated with the VM is currently evicting. At the same 
time amdgpu protects
the evicted list of the VM with a different lock. So this
seems to be entirely
unrelated. Tracking a "currently evicting" state is not part 
of the GPUVM
implementation currently and hence nothing would change for 
amdgpu there.


Sorry for the confusion we use different terminology in amdgpu.

The eviction lock and evicted state is for the VM page tables, 
e.g. if the whole VM is currently not used and swapped out or 
even de-allocated.


This is necessary because we have cases where we need to access 
the VM data without holding the dma-resv lock of this VM. 
Especially figuring out which parts of an address space contain 
mappings and which don't.


I think this is fine, this has nothing to do with lists of 
evicted GEM objects or external GEM
objects, right? Marking mappings (drm_gpuva) as invalidated 
(DRM_GPUVA_INVALIDATED) or accessing

the VA space does not require any dma-resv locks.


I hope so, but I'm not 100% sure.





This is a requirement which comes with HMM handling, you won't 
see this with Vulkan (or OpenGL, VAAPI etc..).



The invalidation lock on the other hand is what in this 
discussion is called eviction lock. This one is needed because 
what I wrote above, during the move callback only the dma-resv 
of the BO which is moved is locked, but not necessarily the 
dma-resv of the VM.


That's yet another thing, right? This is used to track whether 
*any* BO that belongs to the VM is
currently being evicted, correct? As mentioned, as of now this
is not supported in GPUVM and hence would be the same driver
specific code with the same driver specific lock.


That is most likely a show stopper using this for OpenGL based 
workloads as far as I can see. For those you need to be able
to figure out which non-VM BOs have been evicted and which
parts of the VM need updates.


We identify those with a bool in the gpuvm_bo, and that bool is 
protected by the bo_resv. In essence, the "evicted" list must be 
made up-to-date with all relevant locks held before traversing in 
the next exec.


What I still miss with this idea is how do we find all the 
drm_gpuvm_bo structures with the evicted bool set to true? When 
doing the drm_exec dance we come across all external ones and can 
add them to the list if needed, but what about the BOs having the 
VM's dma-resv?


Oh, they can be added to the evict list directly (no bool needed) in 
the eviction code, like in v3. Since for those we indeed hold the 
VM's dma_resv since it's aliased with the object's dma-resv.


Yeah, I wanted to note 

Re: [Nouveau] [PATCH drm-misc-next v3 6/7] drm/gpuvm: generalize dma_resv/extobj handling and GEM validation

2023-09-20 Thread Thomas Hellström


On 9/20/23 09:44, Thomas Hellström wrote:

Hi,

On 9/20/23 07:37, Christian König wrote:

Am 19.09.23 um 17:23 schrieb Thomas Hellström:


On 9/19/23 17:16, Danilo Krummrich wrote:

On 9/19/23 14:21, Thomas Hellström wrote:

Hi Christian

On 9/19/23 14:07, Christian König wrote:

Am 13.09.23 um 17:46 schrieb Danilo Krummrich:

On 9/13/23 17:33, Christian König wrote:

Am 13.09.23 um 17:15 schrieb Danilo Krummrich:

On 9/13/23 16:26, Christian König wrote:

Am 13.09.23 um 14:16 schrieb Danilo Krummrich:
As mentioned in a different mail thread, the reply is based 
on the assumption
that we don't support anything else than GPUVM updates from 
the IOCTL.


I think that this assumption is incorrect.


Well, more precisely I should have said "don't support GPUVM
updates from within fence signaling critical sections". And
looking at the code, that doesn't seem to be what you're doing
there.



Vulkan is just one specific use case, but this here should
probably be able to handle other use cases as well.


Especially with HMM you get the requirement that you need to 
be able to invalidate GPUVM mappings without grabbing a 
reservation lock.


What do you mean with "invalidate GPUVM mappings" in this 
context? drm_gpuvm_bo_evict()
should only be called from a ttm_device_funcs::move callback, 
we should hold the dma-resv

lock there.


Well the question is which dma-resv lock do we hold?

In the move callback we only hold the dma-resv lock of the BO 
which is moved, but when that is a shared BO then that's not 
the same as the one for the VM.


Correct, Thomas' idea was to use the GEM's dma_resv lock to 
protect drm_gpuvm_bo::evicted
and then actually move the drm_gpuvm_bo to the VM's evicted list 
once we grabbed all
dma-resv locks when locking the VM's BOs using drm_exec. We can 
remove them from the evicted
list on validate(). This way we never touch the evicted list 
without holding at least the VM's

dma-resv lock.

Do you have any concerns about that?


Scratching my head a bit how that is supposed to work.

This implies that you go over all the evicted BOs during 
validation and not just the one mentioned in the CS.


That might work for Vulkan, but is pretty much a no-go for OpenGL.









See what the eviction lock in amdgpu is doing for example.


The eviction_lock seems to protect a VM state "evicting" of 
whether any BO that
is associated with the VM is currently evicting. At the same 
time amdgpu protects
the evicted list of the VM with a different lock. So this
seems to be entirely
unrelated. Tracking a "currently evicting" state is not part 
of the GPUVM
implementation currently and hence nothing would change for 
amdgpu there.


Sorry for the confusion we use different terminology in amdgpu.

The eviction lock and evicted state is for the VM page tables, 
e.g. if the whole VM is currently not used and swapped out or 
even de-allocated.


This is necessary because we have cases where we need to access 
the VM data without holding the dma-resv lock of this VM. 
Especially figuring out which parts of an address space contain 
mappings and which don't.


I think this is fine, this has nothing to do with lists of 
evicted GEM objects or external GEM
objects, right? Marking mappings (drm_gpuva) as invalidated 
(DRM_GPUVA_INVALIDATED) or accessing

the VA space does not require any dma-resv locks.


I hope so, but I'm not 100% sure.





This is a requirement which comes with HMM handling, you won't 
see this with Vulkan (or OpenGL, VAAPI etc..).



The invalidation lock on the other hand is what in this 
discussion is called eviction lock. This one is needed because 
what I wrote above, during the move callback only the dma-resv 
of the BO which is moved is locked, but not necessarily the 
dma-resv of the VM.


That's yet another thing, right? This is used to track whether 
*any* BO that belongs to the VM is
currently being evicted, correct? As mentioned, as of now this
is not supported in GPUVM and hence would be the same driver
specific code with the same driver specific lock.


That is most likely a show stopper using this for OpenGL based 
workloads as far as I can see. For those you need to be able
to figure out which non-VM BOs have been evicted and which
parts of the VM need updates.


We identify those with a bool in the gpuvm_bo, and that bool is 
protected by the bo_resv. In essence, the "evicted" list must be 
made up-to-date with all relevant locks held before traversing in 
the next exec.


What I still miss with this idea is how do we find all the 
drm_gpuvm_bo structures with the evicted bool set to true? When 
doing the drm_exec dance we come across all external ones and can 
add them to the list if needed, but what about the BOs having the 
VM's dma-resv?


Oh, they can be added to the evict list directly (no bool needed) in 
the eviction code, like in v3. Since for those we indeed hold the 
VM's dma_resv since it's aliased with the object's dma-resv.


Yeah, I wanted to note what 

Re: [Nouveau] [PATCH drm-misc-next v3 6/7] drm/gpuvm: generalize dma_resv/extobj handling and GEM validation

2023-09-20 Thread Thomas Hellström

Hi,

On 9/20/23 07:37, Christian König wrote:

Am 19.09.23 um 17:23 schrieb Thomas Hellström:


On 9/19/23 17:16, Danilo Krummrich wrote:

On 9/19/23 14:21, Thomas Hellström wrote:

Hi Christian

On 9/19/23 14:07, Christian König wrote:

Am 13.09.23 um 17:46 schrieb Danilo Krummrich:

On 9/13/23 17:33, Christian König wrote:

Am 13.09.23 um 17:15 schrieb Danilo Krummrich:

On 9/13/23 16:26, Christian König wrote:

Am 13.09.23 um 14:16 schrieb Danilo Krummrich:
As mentioned in a different mail thread, the reply is based 
on the assumption
that we don't support anything else than GPUVM updates from 
the IOCTL.


I think that this assumption is incorrect.


Well, more precisely I should have said "don't support GPUVM
updates from within fence signaling critical sections". And
looking at the code, that doesn't seem to be what you're doing
there.



Vulkan is just one specific use case, but this here should
probably be able to handle other use cases as well.


Especially with HMM you get the requirement that you need to 
be able to invalidate GPUVM mappings without grabbing a 
reservation lock.


What do you mean with "invalidate GPUVM mappings" in this 
context? drm_gpuvm_bo_evict()
should only be called from a ttm_device_funcs::move callback, 
we should hold the dma-resv

lock there.


Well the question is which dma-resv lock do we hold?

In the move callback we only hold the dma-resv lock of the BO 
which is moved, but when that is a shared BO then that's not the 
same as the one for the VM.


Correct, Thomas' idea was to use the GEM's dma_resv lock to 
protect drm_gpuvm_bo::evicted
and then actually move the drm_gpuvm_bo to the VM's evicted list 
once we grabbed all
dma-resv locks when locking the VM's BOs using drm_exec. We can 
remove them from the evicted
list on validate(). This way we never touch the evicted list 
without holding at least the VM's

dma-resv lock.

Do you have any concerns about that?


Scratching my head a bit how that is supposed to work.

This implies that you go over all the evicted BOs during 
validation and not just the one mentioned in the CS.


That might work for Vulkan, but is pretty much a no-go for OpenGL.









See what the eviction lock in amdgpu is doing for example.


The eviction_lock seems to protect a VM state "evicting" of 
whether any BO that
is associated with the VM is currently evicting. At the same 
time amdgpu protects
the evicted list of the VM with a different lock. So this
seems to be entirely
unrelated. Tracking a "currently evicting" state is not part of 
the GPUVM
implementation currently and hence nothing would change for 
amdgpu there.


Sorry for the confusion we use different terminology in amdgpu.

The eviction lock and evicted state is for the VM page tables, 
e.g. if the whole VM is currently not used and swapped out or 
even de-allocated.


This is necessary because we have cases where we need to access 
the VM data without holding the dma-resv lock of this VM. 
Especially figuring out which parts of an address space contain 
mappings and which don't.


I think this is fine, this has nothing to do with lists of 
evicted GEM objects or external GEM
objects, right? Marking mappings (drm_gpuva) as invalidated 
(DRM_GPUVA_INVALIDATED) or accessing

the VA space does not require any dma-resv locks.


I hope so, but I'm not 100% sure.





This is a requirement which comes with HMM handling, you won't 
see this with Vulkan (or OpenGL, VAAPI etc..).



The invalidation lock on the other hand is what in this 
discussion is called eviction lock. This one is needed because 
what I wrote above, during the move callback only the dma-resv 
of the BO which is moved is locked, but not necessarily the 
dma-resv of the VM.


That's yet another thing, right? This is used to track whether 
*any* BO that belongs to the VM is
currently being evicted, correct? As mentioned, as of now this
is not supported in GPUVM and hence would be the same driver
specific code with the same driver specific lock.


That is most likely a show stopper using this for OpenGL based 
workloads as far as I can see. For those you need to be able
to figure out which non-VM BOs have been evicted and which
parts of the VM need updates.


We identify those with a bool in the gpuvm_bo, and that bool is 
protected by the bo_resv. In essence, the "evicted" list must be 
made up-to-date with all relevant locks held before traversing in 
the next exec.


What I still miss with this idea is how do we find all the 
drm_gpuvm_bo structures with the evicted bool set to true? When 
doing the drm_exec dance we come across all external ones and can 
add them to the list if needed, but what about the BOs having the 
VM's dma-resv?


Oh, they can be added to the evict list directly (no bool needed) in 
the eviction code, like in v3. Since for those we indeed hold the 
VM's dma_resv since it's aliased with the object's dma-resv.


Yeah, I wanted to note what Danilo seems to think about as well. How 

Re: [Nouveau] [PATCH drm-misc-next v3 6/7] drm/gpuvm: generalize dma_resv/extobj handling and GEM validation

2023-09-19 Thread Christian König

Am 19.09.23 um 17:23 schrieb Thomas Hellström:


On 9/19/23 17:16, Danilo Krummrich wrote:

On 9/19/23 14:21, Thomas Hellström wrote:

Hi Christian

On 9/19/23 14:07, Christian König wrote:

Am 13.09.23 um 17:46 schrieb Danilo Krummrich:

On 9/13/23 17:33, Christian König wrote:

Am 13.09.23 um 17:15 schrieb Danilo Krummrich:

On 9/13/23 16:26, Christian König wrote:

Am 13.09.23 um 14:16 schrieb Danilo Krummrich:
As mentioned in a different mail thread, the reply is based on 
the assumption
that we don't support anything else than GPUVM updates from 
the IOCTL.


I think that this assumption is incorrect.


Well, more precisely I should have said "don't support GPUVM
updates from within fence signaling critical sections". And
looking at the code, that doesn't seem to be what you're doing
there.



Vulkan is just one specific use case, but this here should
probably be able to handle other use cases as well.


Especially with HMM you get the requirement that you need to be 
able to invalidate GPUVM mappings without grabbing a 
reservation lock.


What do you mean with "invalidate GPUVM mappings" in this 
context? drm_gpuvm_bo_evict()
should only be called from a ttm_device_funcs::move callback, 
we should hold the dma-resv

lock there.


Well the question is which dma-resv lock do we hold?

In the move callback we only hold the dma-resv lock of the BO 
which is moved, but when that is a shared BO then that's not the 
same as the one for the VM.


Correct, Thomas' idea was to use the GEM's dma_resv lock to 
protect drm_gpuvm_bo::evicted
and then actually move the drm_gpuvm_bo to the VM's evicted list 
once we grabbed all
dma-resv locks when locking the VM's BOs using drm_exec. We can 
remove them from the evicted
list on validate(). This way we never touch the evicted list 
without holding at least the VM's

dma-resv lock.

Do you have any concerns about that?


Scratching my head a bit how that is supposed to work.

This implies that you go over all the evicted BOs during validation 
and not just the one mentioned in the CS.


That might work for Vulkan, but is pretty much a no-go for OpenGL.









See what the eviction lock in amdgpu is doing for example.


The eviction_lock seems to protect a VM state "evicting" of 
whether any BO that
is associated with the VM is currently evicting. At the same 
time amdgpu protects
the evicted list of the VM with a different lock. So this seems
to be entirely
unrelated. Tracking a "currently evicting" state is not part of 
the GPUVM
implementation currently and hence nothing would change for 
amdgpu there.


Sorry for the confusion we use different terminology in amdgpu.

The eviction lock and evicted state is for the VM page tables, 
e.g. if the whole VM is currently not used and swapped out or 
even de-allocated.


This is necessary because we have cases where we need to access 
the VM data without holding the dma-resv lock of this VM. 
Especially figuring out which parts of an address space contain 
mappings and which don't.


I think this is fine, this has nothing to do with lists of evicted 
GEM objects or external GEM
objects, right? Marking mappings (drm_gpuva) as invalidated 
(DRM_GPUVA_INVALIDATED) or accessing

the VA space does not require any dma-resv locks.


I hope so, but I'm not 100% sure.





This is a requirement which comes with HMM handling, you won't 
see this with Vulkan (or OpenGL, VAAPI etc..).



The invalidation lock on the other hand is what in this 
discussion is called eviction lock. This one is needed because 
what I wrote above, during the move callback only the dma-resv of 
the BO which is moved is locked, but not necessarily the dma-resv 
of the VM.


That's yet another thing, right? This is used to track whether 
*any* BO that belongs to the VM is
currently being evicted, correct? As mentioned, as of now this
is not supported in GPUVM and hence would be the same driver
specific code with the same driver specific lock.


That is most likely a show stopper using this for OpenGL based 
workloads as far as I can see. For those you need to be able
to figure out which non-VM BOs have been evicted and which
parts of the VM need updates.


We identify those with a bool in the gpuvm_bo, and that bool is 
protected by the bo_resv. In essence, the "evicted" list must be 
made up-to-date with all relevant locks held before traversing in 
the next exec.


What I still miss with this idea is how do we find all the 
drm_gpuvm_bo structures with the evicted bool set to true? When doing 
the drm_exec dance we come across all external ones and can add them 
to the list if needed, but what about the BOs having the VM's dma-resv?


Oh, they can be added to the evict list directly (no bool needed) in 
the eviction code, like in v3. Since for those we indeed hold the VM's 
dma_resv since it's aliased with the object's dma-resv.


Yeah, I wanted to note what Danilo seems to be thinking about as well. How do
we figure out which non-VM BOs have been evicted?


We 

Re: [Nouveau] [PATCH drm-misc-next v3 6/7] drm/gpuvm: generalize dma_resv/extobj handling and GEM validation

2023-09-19 Thread Thomas Hellström



On 9/19/23 17:16, Danilo Krummrich wrote:

On 9/19/23 14:21, Thomas Hellström wrote:

Hi Christian

On 9/19/23 14:07, Christian König wrote:

Am 13.09.23 um 17:46 schrieb Danilo Krummrich:

On 9/13/23 17:33, Christian König wrote:

Am 13.09.23 um 17:15 schrieb Danilo Krummrich:

On 9/13/23 16:26, Christian König wrote:

Am 13.09.23 um 14:16 schrieb Danilo Krummrich:
As mentioned in a different mail thread, the reply is based on 
the assumption
that we don't support anything else than GPUVM updates from the 
IOCTL.


I think that this assumption is incorrect.


Well, more precisely I should have said "don't support GPUVM
updates from within fence signaling critical sections". And
looking at the code, that doesn't seem to be what you're doing
there.



Vulkan is just one specific use case, but this here should
probably be able to handle other use cases as well.


Especially with HMM you get the requirement that you need to be 
able to invalidate GPUVM mappings without grabbing a reservation 
lock.


What do you mean with "invalidate GPUVM mappings" in this 
context? drm_gpuvm_bo_evict()
should only be called from a ttm_device_funcs::move callback, we 
should hold the dma-resv

lock there.


Well the question is which dma-resv lock do we hold?

In the move callback we only hold the dma-resv lock of the BO 
which is moved, but when that is a shared BO then that's not the 
same as the one for the VM.


Correct, Thomas' idea was to use the GEM's dma_resv lock to protect 
drm_gpuvm_bo::evicted
and then actually move the drm_gpuvm_bo to the VM's evicted list 
once we grabbed all
dma-resv locks when locking the VM's BOs using drm_exec. We can 
remove them from the evicted
list on validate(). This way we never touch the evicted list 
without holding at least the VM's

dma-resv lock.

Do you have any concerns about that?


Scratching my head a bit how that is supposed to work.

This implies that you go over all the evicted BOs during validation 
and not just the one mentioned in the CS.


That might work for Vulkan, but is pretty much a no-go for OpenGL.









See what the eviction lock in amdgpu is doing for example.


The eviction_lock seems to protect a VM state "evicting" of 
whether any BO that
is associated with the VM is currently evicting. At the same time 
amdgpu protects
the evicted list of the VM with a different lock. So this seems
to be entirely
unrelated. Tracking a "currently evicting" state is not part of 
the GPUVM
implementation currently and hence nothing would change for 
amdgpu there.


Sorry for the confusion we use different terminology in amdgpu.

The eviction lock and evicted state is for the VM page tables, 
e.g. if the whole VM is currently not used and swapped out or even 
de-allocated.


This is necessary because we have cases where we need to access 
the VM data without holding the dma-resv lock of this VM. 
Especially figuring out which parts of an address space contain 
mappings and which don't.


I think this is fine, this has nothing to do with lists of evicted 
GEM objects or external GEM
objects, right? Marking mappings (drm_gpuva) as invalidated 
(DRM_GPUVA_INVALIDATED) or accessing

the VA space does not require any dma-resv locks.


I hope so, but I'm not 100% sure.





This is a requirement which comes with HMM handling, you won't see 
this with Vulkan (or OpenGL, VAAPI etc..).



The invalidation lock on the other hand is what in this discussion 
is called eviction lock. This one is needed because what I wrote 
above, during the move callback only the dma-resv of the BO which 
is moved is locked, but not necessarily the dma-resv of the VM.


That's yet another thing, right? This is used to track whether 
*any* BO that belongs to the VM is
currently being evicted, correct? As mentioned, as of now this is
not supported in GPUVM and hence would be the same driver specific
code with the same driver specific lock.


That is most likely a show stopper using this for OpenGL based 
workloads as far as I can see. For those you need to be able
to figure out which non-VM BOs have been evicted and which
parts of the VM need updates.


We identify those with a bool in the gpuvm_bo, and that bool is 
protected by the bo_resv. In essence, the "evicted" list must be made 
up-to-date with all relevant locks held before traversing in the next 
exec.


What I still miss with this idea is how do we find all the 
drm_gpuvm_bo structures with the evicted bool set to true? When doing 
the drm_exec dance we come across all external ones and can add them 
to the list if needed, but what about the BOs having the VM's dma-resv?


Oh, they can be added to the evict list directly (no bool needed) in the 
eviction code, like in v3. Since for those we indeed hold the VM's 
dma_resv since it's aliased with the object's dma-resv.
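
Using the made-up structs from the sketch earlier in the thread, that
split in the eviction path could look roughly like this (illustrative
only, not the actual patch):

static void example_vm_bo_evict(struct example_vm_bo *vm_bo)
{
	struct example_vm *vm = vm_bo->vm;

	dma_resv_assert_held(vm_bo->obj->resv);

	if (vm_bo->obj->resv == vm->resv) {
		/* The object's dma-resv *is* the VM's dma-resv, so the
		 * VM's evicted list may be modified right here.
		 */
		list_move_tail(&vm_bo->evict_entry, &vm->evicted);
	} else {
		/* External/shared BO: only set the flag here; the move to
		 * vm->evicted happens later, once drm_exec has locked the
		 * VM's dma-resv as well.
		 */
		vm_bo->evicted = true;
	}
}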


/Thomas







If you mean that we need to unbind all vmas of all vms of evicted bos 
before evicting, we don't do that, at least not in Xe, since when evicting
we wait 

Re: [Nouveau] [PATCH drm-misc-next v3 6/7] drm/gpuvm: generalize dma_resv/extobj handling and GEM validation

2023-09-19 Thread Danilo Krummrich

On 9/19/23 14:21, Thomas Hellström wrote:

Hi Christian

On 9/19/23 14:07, Christian König wrote:

Am 13.09.23 um 17:46 schrieb Danilo Krummrich:

On 9/13/23 17:33, Christian König wrote:

Am 13.09.23 um 17:15 schrieb Danilo Krummrich:

On 9/13/23 16:26, Christian König wrote:

Am 13.09.23 um 14:16 schrieb Danilo Krummrich:

As mentioned in a different mail thread, the reply is based on the assumption
that we don't support anything else than GPUVM updates from the IOCTL.


I think that this assumption is incorrect.


Well, more precisely I should have said "don't support GPUVM updates from within
fence signaling critical sections". And looking at the code, that doesn't seem to be
what you're doing there.



Vulkan is just one specific use case, but this here should probably be able to
handle other use cases as well.

Especially with HMM you get the requirement that you need to be able to 
invalidate GPUVM mappings without grabbing a reservation lock.


What do you mean with "invalidate GPUVM mappings" in this context? 
drm_gpuvm_bo_evict()
should only be called from a ttm_device_funcs::move callback, we should hold 
the dma-resv
lock there.


Well the question is which dma-resv lock do we hold?

In the move callback we only hold the dma-resv lock of the BO which is moved, 
but when that is a shared BO then that's not the same as the one for the VM.


Correct, Thomas' idea was to use the GEM's dma_resv lock to protect 
drm_gpuvm_bo::evicted
and then actually move the drm_gpuvm_bo to the VM's evicted list once we 
grabbed all
dma-resv locks when locking the VM's BOs using drm_exec. We can remove them 
from the evicted
list on validate(). This way we never touch the evicted list without holding at 
least the VM's
dma-resv lock.

Do you have any concerns about that?


Scratching my head a bit how that is supposed to work.

This implies that you go over all the evicted BOs during validation and not 
just the one mentioned in the CS.

That might work for Vulkan, but is pretty much a no-go for OpenGL.









See what the eviction lock in amdgpu is doing for example.


The eviction_lock seems to protect a VM state "evicting" of whether any BO that
is associated with the VM is currently evicting. At the same time amdgpu 
protects
the evicted list of the VM with a different lock. So this seems to be entirely
unrelated. Tracking a "currently evicting" state is not part of the GPUVM
implementation currently and hence nothing would change for amdgpu there.


Sorry for the confusion we use different terminology in amdgpu.

The eviction lock and evicted state is for the VM page tables, e.g. if the 
whole VM is currently not used and swapped out or even de-allocated.

This is necessary because we have cases where we need to access the VM data 
without holding the dma-resv lock of this VM. Especially figuring out which 
parts of an address space contain mappings and which don't.


I think this is fine, this has nothing to do with lists of evicted GEM objects 
or external GEM
objects, right? Marking mappings (drm_gpuva) as invalidated 
(DRM_GPUVA_INVALIDATED) or accessing
the VA space does not require any dma-resv locks.


I hope so, but I'm not 100% sure.





This is a requirement which comes with HMM handling; you won't see this with
Vulkan (or OpenGL, VAAPI, etc.).


The invalidation lock on the other hand is what in this discussion is called
eviction lock. This one is needed because of what I wrote above: during the move
callback only the dma-resv of the BO which is moved is locked, but not
necessarily the dma-resv of the VM.


That's yet another thing, right? This is used to track whether *any* BO that
belongs to the VM is currently being evicted, correct? As mentioned, as of now
this is not supported in GPUVM and hence would be the same driver-specific code
with the same driver-specific lock.


That is most likely a show stopper for using this for OpenGL-based workloads as far
as I can see. For those you need to be able to figure out which non-VM BOs have
been evicted and which parts of the VM need updates.


We identify those with a bool in the gpuvm_bo, and that bool is protected by the bo_resv. 
In essence, the "evicted" list must be made up-to-date with all relevant locks 
held before traversing in the next exec.


What I still miss with this idea is how do we find all the drm_gpuvm_bo 
structures with the evicted bool set to true? When doing the drm_exec dance we 
come across all external ones and can add them to the list if needed, but what 
about the BOs having the VM's dma-resv?



If you mean that we need to unbind all VMAs of all VMs of evicted BOs before
evicting, we don't do that, at least not in Xe, since when evicting we wait for VM
idle, and it can't access anything through the stale VMAs until they have been
revalidated and rebound.

/Thomas







Regards,
Christian.





Regards,
Christian.



On Wed, Sep 13, 2023 at 11:14:46AM +0200, Thomas Hellström wrote:

Hi!

On Wed, 2023-09-13 at 01:36 +0200, 

Re: [Nouveau] [PATCH drm-misc-next v3 6/7] drm/gpuvm: generalize dma_resv/extobj handling and GEM validation

2023-09-19 Thread Thomas Hellström

Hi Christian

On 9/19/23 14:07, Christian König wrote:

Am 13.09.23 um 17:46 schrieb Danilo Krummrich:

On 9/13/23 17:33, Christian König wrote:

Am 13.09.23 um 17:15 schrieb Danilo Krummrich:

On 9/13/23 16:26, Christian König wrote:

Am 13.09.23 um 14:16 schrieb Danilo Krummrich:
As mentioned in a different mail thread, the reply is based on the assumption
that we don't support anything other than GPUVM updates from the IOCTL.


I think that this assumption is incorrect.


Well, more precisely I should have said "don't support GPUVM updates from within
fence signaling critical sections". And looking at the code, that doesn't seem to be
what you're doing there.



Vulkan is just one specific use case, but this here should
probably be able to handle other use cases as well.


Especially with HMM you get the requirement that you need to be 
able to invalidate GPUVM mappings without grabbing a reservation 
lock.


What do you mean with "invalidate GPUVM mappings" in this context? 
drm_gpuvm_bo_evict()
should only be called from a ttm_device_funcs::move callback, we 
should hold the dma-resv

lock there.


Well the question is which dma-resv lock do we hold?

In the move callback we only hold the dma-resv lock of the BO which 
is moved, but when that is a shared BO then that's not the same as 
the one for the VM.


Correct, Thomas' idea was to use the GEM's dma_resv lock to protect 
drm_gpuvm_bo::evicted
and then actually move the drm_gpuvm_bo to the VM's evicted list once 
we grabbed all
dma-resv locks when locking the VM's BOs using drm_exec. We can 
remove them from the evicted
list on validate(). This way we never touch the evicted list without 
holding at least the VM's

dma-resv lock.

Do you have any concerns about that?


Scratching my head a bit how that is supposed to work.

This implies that you go over all the evicted BOs during validation 
and not just the one mentioned in the CS.


That might work for Vulkan, but is pretty much a no-go for OpenGL.









See what the eviction lock in amdgpu is doing for example.


The eviction_lock seems to protect a VM state "evicting", i.e. whether any BO that
is associated with the VM is currently evicting. At the same time amdgpu protects
the evicted list of the VM with a different lock. So this seems to be entirely
unrelated. Tracking a "currently evicting" state is not part of the GPUVM
implementation currently and hence nothing would change for amdgpu there.


Sorry for the confusion; we use different terminology in amdgpu.

The eviction lock and evicted state are for the VM page tables, e.g.
if the whole VM is currently not used and swapped out or even
de-allocated.

This is necessary because we have cases where we need to access the
VM data without holding the dma-resv lock of this VM, especially
figuring out which parts of an address space contain mappings and
which don't.


I think this is fine, this has nothing to do with lists of evicted 
GEM objects or external GEM
objects, right? Marking mappings (drm_gpuva) as invalidated 
(DRM_GPUVA_INVALIDATED) or accessing

the VA space does not require any dma-resv locks.


I hope so, but I'm not 100% sure.





This is a requirement which comes with HMM handling; you won't see
this with Vulkan (or OpenGL, VAAPI, etc.).



The invalidation lock on the other hand is what in this discussion
is called eviction lock. This one is needed because of what I wrote
above: during the move callback only the dma-resv of the BO which is
moved is locked, but not necessarily the dma-resv of the VM.


That's yet another thing, right? This is used to track whether *any*
BO that belongs to the VM is currently being evicted, correct? As
mentioned, as of now this is not supported in GPUVM and hence would be
the same driver-specific code with the same driver-specific lock.


That is most likely a show stopper for using this for OpenGL-based
workloads as far as I can see. For those you need to be able to figure
out which non-VM BOs have been evicted and which parts of the VM need
updates.


We identify those with a bool in the gpuvm_bo, and that bool is 
protected by the bo_resv. In essence, the "evicted" list must be made 
up-to-date with all relevant locks held before traversing in the next exec.


If you mean that we need to unbind all VMAs of all VMs of evicted BOs
before evicting, we don't do that, at least not in Xe, since when evicting we
wait for VM idle, and it can't access anything through the stale VMAs
until they have been revalidated and rebound.


/Thomas







Regards,
Christian.





Regards,
Christian.



On Wed, Sep 13, 2023 at 11:14:46AM +0200, Thomas Hellström wrote:

Hi!

On Wed, 2023-09-13 at 01:36 +0200, Danilo Krummrich wrote:

On Tue, Sep 12, 2023 at 09:23:08PM +0200, Thomas Hellström wrote:

On 9/12/23 18:50, Danilo Krummrich wrote:
On Tue, Sep 12, 2023 at 06:20:32PM +0200, Thomas Hellström 
wrote:

Hi, Danilo,

On 9/9/23 17:31, Danilo Krummrich wrote:

So far the DRM GPUVA manager offers 

Re: [Nouveau] [PATCH drm-misc-next v3 6/7] drm/gpuvm: generalize dma_resv/extobj handling and GEM validation

2023-09-19 Thread Christian König

Am 13.09.23 um 17:46 schrieb Danilo Krummrich:

On 9/13/23 17:33, Christian König wrote:

Am 13.09.23 um 17:15 schrieb Danilo Krummrich:

On 9/13/23 16:26, Christian König wrote:

Am 13.09.23 um 14:16 schrieb Danilo Krummrich:
As mentioned in a different mail thread, the reply is based on the assumption
that we don't support anything other than GPUVM updates from the IOCTL.


I think that this assumption is incorrect.


Well, more precisely I should have said "don't support GPUVM updates from within
fence signaling critical sections". And looking at the code, that doesn't seem to be
what you're doing there.



Vulkan is just one specific use case, but this here should
probably be able to handle other use cases as well.


Especially with HMM you get the requirement that you need to be 
able to invalidate GPUVM mappings without grabbing a reservation lock.


What do you mean with "invalidate GPUVM mappings" in this context? 
drm_gpuvm_bo_evict()
should only be called from a ttm_device_funcs::move callback, we 
should hold the dma-resv

lock there.


Well the question is which dma-resv lock do we hold?

In the move callback we only hold the dma-resv lock of the BO which 
is moved, but when that is a shared BO then that's not the same as 
the one for the VM.


Correct, Thomas' idea was to use the GEM's dma_resv lock to protect 
drm_gpuvm_bo::evicted
and then actually move the drm_gpuvm_bo to the VM's evicted list once 
we grabbed all
dma-resv locks when locking the VM's BOs using drm_exec. We can remove 
them from the evicted
list on validate(). This way we never touch the evicted list without 
holding at least the VM's

dma-resv lock.

Do you have any concerns about that?


Scratching my head a bit how that is supposed to work.

This implies that you go over all the evicted BOs during validation and 
not just the one mentioned in the CS.


That might work for Vulkan, but is pretty much a no-go for OpenGL.









See what the eviction lock in amdgpu is doing for example.


The eviction_lock seems to protect a VM state "evicting", i.e. whether any BO that
is associated with the VM is currently evicting. At the same time amdgpu protects
the evicted list of the VM with a different lock. So this seems to be entirely
unrelated. Tracking a "currently evicting" state is not part of the GPUVM
implementation currently and hence nothing would change for amdgpu there.


Sorry for the confusion; we use different terminology in amdgpu.

The eviction lock and evicted state are for the VM page tables, e.g.
if the whole VM is currently not used and swapped out or even
de-allocated.

This is necessary because we have cases where we need to access the
VM data without holding the dma-resv lock of this VM, especially
figuring out which parts of an address space contain mappings and
which don't.


I think this is fine, this has nothing to do with lists of evicted GEM 
objects or external GEM
objects, right? Marking mappings (drm_gpuva) as invalidated 
(DRM_GPUVA_INVALIDATED) or accessing

the VA space does not require any dma-resv locks.


I hope so, but I'm not 100% sure.





This is a requirement which comes with HMM handling; you won't see
this with Vulkan (or OpenGL, VAAPI, etc.).



The invalidation lock on the other hand is what in this discussion is
called eviction lock. This one is needed because of what I wrote above:
during the move callback only the dma-resv of the BO which is moved
is locked, but not necessarily the dma-resv of the VM.


That's yet another thing, right? This is used to track whether *any*
BO that belongs to the VM is currently being evicted, correct? As
mentioned, as of now this is not supported in GPUVM and hence would be
the same driver-specific code with the same driver-specific lock.


That is most likely a show stopper for using this for OpenGL-based workloads
as far as I can see. For those you need to be able to figure out which
non-VM BOs have been evicted and which parts of the VM need updates.


BTW: Did I get it right that you put the dma_resv object into the VM and
not into the first GEM object associated with the VM? If yes, then that
would be a circular dependency.


Regards,
Christian.







Regards,
Christian.





Regards,
Christian.



On Wed, Sep 13, 2023 at 11:14:46AM +0200, Thomas Hellström wrote:

Hi!

On Wed, 2023-09-13 at 01:36 +0200, Danilo Krummrich wrote:

On Tue, Sep 12, 2023 at 09:23:08PM +0200, Thomas Hellström wrote:

On 9/12/23 18:50, Danilo Krummrich wrote:

On Tue, Sep 12, 2023 at 06:20:32PM +0200, Thomas Hellström wrote:

Hi, Danilo,

On 9/9/23 17:31, Danilo Krummrich wrote:

So far the DRM GPUVA manager offers common infrastructure to
track GPU VA
allocations and mappings, generically connect GPU VA mappings
to their
backing buffers and perform more complex mapping operations
on the GPU VA
space.

However, there are more design patterns commonly used by
drivers, which
can potentially be generalized in order to make the DRM GPUVA
manager
represent a basic 

Re: [Nouveau] [PATCH drm-misc-next v3 6/7] drm/gpuvm: generalize dma_resv/extobj handling and GEM validation

2023-09-18 Thread Danilo Krummrich

On 9/14/23 19:15, Danilo Krummrich wrote:

On 9/14/23 19:13, Thomas Hellström wrote:

On Thu, 2023-09-14 at 17:27 +0200, Danilo Krummrich wrote:

On 9/14/23 13:32, Thomas Hellström wrote:


On 9/14/23 12:57, Danilo Krummrich wrote:

On 9/13/23 14:16, Danilo Krummrich wrote:




And validate() can remove it while still holding all dma-
resv locks,
neat!
However, what if two tasks are trying to lock the VA space
concurrently? What
do we do when the drm_gpuvm_bo's refcount drops to zero in
drm_gpuva_unlink()?
Are we guaranteed that at this point of time the
drm_gpuvm_bo is not
on the
evicted list? Because otherwise we would call
drm_gpuvm_bo_destroy()
with the
dma-resv lock held, which wouldn't be allowed, since
drm_gpuvm_bo_destroy()
might drop the last reference to the drm_gem_object and
hence we'd
potentially
free the dma-resv lock while holding it, at least if it's
an external
object.


Easiest way in this scheme is to think of the lists as being
protected
by the vm's resv lock. That means anybody calling unlink()
must also
hold the vm's resv lock. (Which is OK from a UAF point of
view, but perhaps not from a locking-inversion POV for an async list
update).


This would mean that on unlink() we'd need to hold the VM's
resv lock and the
corresponding GEM's resv lock (in case they're not the same
anyways) because the
VM's resv lock would protect the external / evicted object
lists and the GEM
objects resv lock protects the GEM's list of drm_gpuvm_bos and
the
drm_gpuvm_bo's list of drm_gpuvas.


As mentioned below the same applies for drm_gpuvm_bo_put() since
it might
destroy the vm_bo, which includes removing the vm_bo from
external / evicted
object lists and the GEMs list of vm_bos.

As mentioned, if the GEM's dma-resv is different from the VM's
dma-resv we need
to take both locks. Ultimately, this would mean we need a
drm_exec loop, because
we can't know the order in which to take these locks. Doing a
full drm_exec loop
just to put() a vm_bo doesn't sound reasonable to me.

Can we instead just have an internal mutex for locking the lists
such that we
avoid taking and dropping the spinlocks, which we use currently,
in a loop?


You'd have the same locking inversion problem with a mutex, right?
Since in the eviction path you have resv->mutex, from exec you have
resv->mutex->resv because validate would attempt to grab resv.


Both lists, evict and extobj, would need to have a separate mutex,
not a common one.
We'd also need a dedicated GEM gpuva lock. Then the only rule would
be that you can't
hold the dma-resv lock when calling put(). Which I admit is not that
nice.

With the current spinlock solution drivers wouldn't need to worry
about anything locking
related though. So maybe I come back to your proposal of having a
switch for external
locking with dma-resv locks entirely. Such that with external dma-
resv locking I skip
all the spinlocks and add lockdep checks instead.

I think that makes the most sense in terms of taking advantage of
external dma-resv locking
where possible and on the other hand having a self-contained solution
if not. This should
get all concerns out of the way, yours, Christian's and Boris'.


If we need additional locks yes, I'd prefer the opt-in/opt-out spinlock
solution, and check back after a while to see if we can remove either
option once most pitfalls are hit.


Sounds good, I'll prepare this for a V4.


I was considering getting rid of the spinlocks using srcu for both external and
evicted objects instead. This would get us rid of taking/dropping the spinlock 
in
every iteration step of the lists, limiting it to a single 
srcu_read_{lock,unlock}
call per list walk. Plus, obviously the list_add_rcu() and list_del_rcu() 
variants
as accessors. The accessors, would probably still need a spinlock to protect 
against
concurrent list_add_rcu()/list_del_rcu() calls, but I think those are not a 
concern.

Any concerns from your side with this variant?
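
A rough sketch of that idea, assuming a per-VM srcu_struct plus a spinlock taken by writers only; the structure and names are illustrative, not a proposed API:

struct sketch_vm {
    struct srcu_struct list_srcu;
    spinlock_t list_lock;          /* serializes writers only */
    struct list_head extobj_list;
};

struct sketch_vm_bo {
    struct list_head extobj_entry;
};

static void sketch_extobj_add(struct sketch_vm *vm, struct sketch_vm_bo *vm_bo)
{
    spin_lock(&vm->list_lock);
    list_add_tail_rcu(&vm_bo->extobj_entry, &vm->extobj_list);
    spin_unlock(&vm->list_lock);
}

static void sketch_extobj_walk(struct sketch_vm *vm)
{
    struct sketch_vm_bo *vm_bo;
    int idx;

    /* A single srcu_read_lock()/unlock() pair for the whole walk. */
    idx = srcu_read_lock(&vm->list_srcu);
    list_for_each_entry_rcu(vm_bo, &vm->extobj_list, extobj_entry,
                            srcu_read_lock_held(&vm->list_srcu)) {
        /* inspect vm_bo */
    }
    srcu_read_unlock(&vm->list_srcu, idx);
}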



- Danilo



Thanks,
/Thomas






That said, xe currently indeed does the vm+bo exec dance on vma
put.

One reason why that seemingly horrible construct is good is that
when evicting an extobj and you need to access individual VMAs to
zap page table entries or do a TLB flush, those VMAs are not allowed to
go away (we're not refcounting them). Holding the bo resv on gpuva
put prevents that from happening. Possibly one could use another
mutex to protect the gem->vm_bo list to achieve the same, but we'd
need to hold it on gpuva put.

/Thomas




- Danilo









For extobjs an outer lock would be enough in case of
Xe, but I
really would not
like to add even more complexity just to get the
spinlock out of
the way in case
the driver already has an outer lock protecting this
path.


I must disagree here. These spinlocks and atomic
operations are
pretty
costly and as discussed earlier this type of locking was
the reason
(at
least according to the commit message) that made
Christian drop the
XArray
use in drm_exec for the same set of objects: 

Re: [Nouveau] [PATCH drm-misc-next v3 6/7] drm/gpuvm: generalize dma_resv/extobj handling and GEM validation

2023-09-14 Thread Thomas Hellström
On Thu, 2023-09-14 at 19:25 +0200, Danilo Krummrich wrote:
> On 9/14/23 19:21, Thomas Hellström wrote:
> > On Thu, 2023-09-14 at 18:36 +0200, Danilo Krummrich wrote:
> > > On 9/14/23 15:48, Thomas Hellström wrote:
> > > > Hi, Danilo
> > > > 
> > > > Some additional minor comments as xe conversion progresses.
> > > > 
> > > > On 9/9/23 17:31, Danilo Krummrich wrote:
> > > > > So far the DRM GPUVA manager offers common infrastructure to
> > > > > track GPU VA
> > > > > allocations and mappings, generically connect GPU VA mappings
> > > > > to
> > > > > their
> > > > > backing buffers and perform more complex mapping operations
> > > > > on
> > > > > the GPU VA
> > > > > space.
> > > > > 
> > > > > However, there are more design patterns commonly used by
> > > > > drivers,
> > > > > which
> > > > > can potentially be generalized in order to make the DRM GPUVA
> > > > > manager
> > > > > represent a basic GPU-VM implementation. In this context,
> > > > > this
> > > > > patch aims
> > > > > at generalizing the following elements.
> > > > > 
> > > > > 1) Provide a common dma-resv for GEM objects not being used
> > > > > outside of
> > > > >  this GPU-VM.
> > > > > 
> > > > > 2) Provide tracking of external GEM objects (GEM objects
> > > > > which
> > > > > are
> > > > >  shared with other GPU-VMs).
> > > > > 
> > > > > 3) Provide functions to efficiently lock all GEM objects dma-
> > > > > resv
> > > > > the
> > > > >  GPU-VM contains mappings of.
> > > > > 
> > > > > 4) Provide tracking of evicted GEM objects the GPU-VM
> > > > > contains
> > > > > mappings
> > > > >  of, such that validation of evicted GEM objects is
> > > > > accelerated.
> > > > > 
> > > > > 5) Provide some convenience functions for common patterns.
> > > > > 
> > > > > Rather than being designed as a "framework", the target is to
> > > > > make all
> > > > > features appear as a collection of optional helper functions,
> > > > > such that
> > > > > drivers are free to make use of the DRM GPUVA managers basic
> > > > > functionality and opt-in for other features without setting
> > > > > any
> > > > > feature
> > > > > flags, just by making use of the corresponding functions.
> > > > > 
> > > > > Big kudos to Boris Brezillon for his help to figure out
> > > > > locking
> > > > > for drivers
> > > > > updating the GPU VA space within the fence signalling path.
> > > > > 
> > > > > Suggested-by: Matthew Brost 
> > > > > Signed-off-by: Danilo Krummrich 
> > > > > ---
> > > > > 
> > > > > +/**
> > > > > + * drm_gpuvm_bo_evict() - add / remove a &drm_gem_object to / from a
> > > > > + * &drm_gpuvm's evicted list
> > > > > + * @obj: the &drm_gem_object to add or remove
> > > > > + * @evict: indicates whether the object is evicted
> > > > > + *
> > > > > + * Adds a &drm_gem_object to or removes it from all &drm_gpuvm's evicted
> > > > > + * list containing a mapping of this &drm_gem_object.
> > > > > + */
> > > > > +void
> > > > > +drm_gpuvm_bo_evict(struct drm_gem_object *obj, bool evict)
> > > > > +{
> > > > > +    struct drm_gpuvm_bo *vm_bo;
> > > > > +
> > > > > +    drm_gem_for_each_gpuvm_bo(vm_bo, obj) {
> > > > > +    if (evict)
> > > > > +    drm_gpuvm_bo_list_add(vm_bo, evict);
> > > > > +    else
> > > > > +    drm_gpuvm_bo_list_del(vm_bo, evict);
> > > > > +    }
> > > > > +}
> > > > > +EXPORT_SYMBOL_GPL(drm_gpuvm_bo_evict);
> > > > > +
> > > > 
> > > > We need a drm_gpuvm_bo_evict(struct drm_gpuvm_bo *vm_bo, ...) that
> > > > puts a single gpuvm_bo on the list; the above function could
> > > > perhaps be renamed as drm_gpuvm_gem_obj_evict(obj, ...).
> > > 
> > > Makes sense - gonna change that.
> > > 
> > > > 
> > > > Reason is some vm's are faulting vms which don't have an evict
> > > > list, but validate from the pagefault handler. Also evict ==
> > > > false
> > > > is dangerous because if called from within an exec, it might
> > > > remove
> > > > the obj from other vm's evict list before they've had a chance
> > > > to
> > > > rebind their VMAs.
> > > > 
> > > > >    static int
> > > > >    __drm_gpuva_insert(struct drm_gpuvm *gpuvm,
> > > > >   struct drm_gpuva *va)
> > > > > diff --git a/include/drm/drm_gpuvm.h
> > > > > b/include/drm/drm_gpuvm.h
> > > > > index afa50b9059a2..834bb6d6617e 100644
> > > > > --- a/include/drm/drm_gpuvm.h
> > > > > +++ b/include/drm/drm_gpuvm.h
> > > > > @@ -26,10 +26,12 @@
> > > > >     */
> > > > >    #include 
> > > > > +#include 
> > > > >    #include 
> > > > >    #include 
> > > > >    #include 
> > > > > +#include 
> > > > >    struct drm_gpuvm;
> > > > >    struct drm_gpuvm_bo;
> > > > > @@ -259,6 +261,38 @@ struct drm_gpuvm {
> > > > >     * space
> > > > >     */
> > > > >    struct dma_resv *resv;
> > > > > +
> > > > > +    /**
> > > > > + * @extobj: structure holding the extobj list
> > > > > + */
> > > > > +    struct {
> > > > > +    /**
> > > > > + * @list: _head storing 

Re: [Nouveau] [PATCH drm-misc-next v3 6/7] drm/gpuvm: generalize dma_resv/extobj handling and GEM validation

2023-09-14 Thread Danilo Krummrich

On 9/14/23 19:21, Thomas Hellström wrote:

On Thu, 2023-09-14 at 18:36 +0200, Danilo Krummrich wrote:

On 9/14/23 15:48, Thomas Hellström wrote:

Hi, Danilo

Some additional minor comments as xe conversion progresses.

On 9/9/23 17:31, Danilo Krummrich wrote:

So far the DRM GPUVA manager offers common infrastructure to
track GPU VA
allocations and mappings, generically connect GPU VA mappings to
their
backing buffers and perform more complex mapping operations on
the GPU VA
space.

However, there are more design patterns commonly used by drivers,
which
can potentially be generalized in order to make the DRM GPUVA
manager
represent a basic GPU-VM implementation. In this context, this
patch aims
at generalizing the following elements.

1) Provide a common dma-resv for GEM objects not being used
outside of
     this GPU-VM.

2) Provide tracking of external GEM objects (GEM objects which
are
     shared with other GPU-VMs).

3) Provide functions to efficiently lock all GEM objects dma-resv
the
     GPU-VM contains mappings of.

4) Provide tracking of evicted GEM objects the GPU-VM contains
mappings
     of, such that validation of evicted GEM objects is
accelerated.

5) Provide some convenience functions for common patterns.

Rather than being designed as a "framework", the target is to
make all
features appear as a collection of optional helper functions,
such that
drivers are free to make use of the DRM GPUVA managers basic
functionality and opt-in for other features without setting any
feature
flags, just by making use of the corresponding functions.

Big kudos to Boris Brezillon for his help to figure out locking
for drivers
updating the GPU VA space within the fence signalling path.

Suggested-by: Matthew Brost 
Signed-off-by: Danilo Krummrich 
---

+/**
+ * drm_gpuvm_bo_evict() - add / remove a &drm_gem_object to / from a
+ * &drm_gpuvm's evicted list
+ * @obj: the &drm_gem_object to add or remove
+ * @evict: indicates whether the object is evicted
+ *
+ * Adds a &drm_gem_object to or removes it from all &drm_gpuvm's evicted
+ * list containing a mapping of this &drm_gem_object.
+ */
+void
+drm_gpuvm_bo_evict(struct drm_gem_object *obj, bool evict)
+{
+    struct drm_gpuvm_bo *vm_bo;
+
+    drm_gem_for_each_gpuvm_bo(vm_bo, obj) {
+    if (evict)
+    drm_gpuvm_bo_list_add(vm_bo, evict);
+    else
+    drm_gpuvm_bo_list_del(vm_bo, evict);
+    }
+}
+EXPORT_SYMBOL_GPL(drm_gpuvm_bo_evict);
+


We need a drm_gpuvm_bo_evict(struct drm_gpuvm_bo *vm_bo, ...) that
puts a single gpuvm_bo on the list; the above function could
perhaps be renamed as drm_gpuvm_gem_obj_evict(obj, ...).


Makes sense - gonna change that.



Reason is some vm's are faulting vms which don't have an evict
list, but validate from the pagefault handler. Also evict == false
is dangerous because if called from within an exec, it might remove
the obj from other vm's evict list before they've had a chance to
rebind their VMAs.


   static int
   __drm_gpuva_insert(struct drm_gpuvm *gpuvm,
  struct drm_gpuva *va)
diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
index afa50b9059a2..834bb6d6617e 100644
--- a/include/drm/drm_gpuvm.h
+++ b/include/drm/drm_gpuvm.h
@@ -26,10 +26,12 @@
    */
   #include 
+#include 
   #include 
   #include 
   #include 
+#include 
   struct drm_gpuvm;
   struct drm_gpuvm_bo;
@@ -259,6 +261,38 @@ struct drm_gpuvm {
    * space
    */
   struct dma_resv *resv;
+
+    /**
+ * @extobj: structure holding the extobj list
+ */
+    struct {
+    /**
+ * @list: &list_head storing &drm_gpuvm_bos serving as
+ * external object
+ */
+    struct list_head list;
+
+    /**
+ * @lock: spinlock to protect the extobj list
+ */
+    spinlock_t lock;
+    } extobj;
+
+    /**
+ * @evict: structure holding the evict list and evict list
lock
+ */
+    struct {
+    /**
+ * @list: &list_head storing &drm_gpuvm_bos currently
being
+ * evicted
+ */
+    struct list_head list;
+
+    /**
+ * @lock: spinlock to protect the evict list
+ */
+    spinlock_t lock;
+    } evict;
   };
   void drm_gpuvm_init(struct drm_gpuvm *gpuvm, struct drm_device
*drm,
@@ -268,6 +302,21 @@ void drm_gpuvm_init(struct drm_gpuvm *gpuvm,
struct drm_device *drm,
   const struct drm_gpuvm_ops *ops);
   void drm_gpuvm_destroy(struct drm_gpuvm *gpuvm);
+/**
+ * drm_gpuvm_is_extobj() - indicates whether the given &drm_gem_object is an
+ * external object
+ * @gpuvm: the &drm_gpuvm to check
+ * @obj: the &drm_gem_object to check
+ *
+ * Returns: true if the &drm_gem_object &dma_resv differs from the
+ * &drm_gpuvm's &dma_resv, false otherwise
+ */
+static inline bool drm_gpuvm_is_extobj(struct drm_gpuvm *gpuvm,
+   struct drm_gem_object *obj)
+{
+    return obj && obj->resv != gpuvm->resv;
+}
+
   static inline struct drm_gpuva *
   __drm_gpuva_next(struct drm_gpuva *va)
   {
@@ -346,6 +395,128 @@ __drm_gpuva_next(struct 

Re: [Nouveau] [PATCH drm-misc-next v3 6/7] drm/gpuvm: generalize dma_resv/extobj handling and GEM validation

2023-09-14 Thread Thomas Hellström
On Thu, 2023-09-14 at 18:36 +0200, Danilo Krummrich wrote:
> On 9/14/23 15:48, Thomas Hellström wrote:
> > Hi, Danilo
> > 
> > Some additional minor comments as xe conversion progresses.
> > 
> > On 9/9/23 17:31, Danilo Krummrich wrote:
> > > So far the DRM GPUVA manager offers common infrastructure to
> > > track GPU VA
> > > allocations and mappings, generically connect GPU VA mappings to
> > > their
> > > backing buffers and perform more complex mapping operations on
> > > the GPU VA
> > > space.
> > > 
> > > However, there are more design patterns commonly used by drivers,
> > > which
> > > can potentially be generalized in order to make the DRM GPUVA
> > > manager
> > > represent a basic GPU-VM implementation. In this context, this
> > > patch aims
> > > at generalizing the following elements.
> > > 
> > > 1) Provide a common dma-resv for GEM objects not being used
> > > outside of
> > >     this GPU-VM.
> > > 
> > > 2) Provide tracking of external GEM objects (GEM objects which
> > > are
> > >     shared with other GPU-VMs).
> > > 
> > > 3) Provide functions to efficiently lock all GEM objects dma-resv
> > > the
> > >     GPU-VM contains mappings of.
> > > 
> > > 4) Provide tracking of evicted GEM objects the GPU-VM contains
> > > mappings
> > >     of, such that validation of evicted GEM objects is
> > > accelerated.
> > > 
> > > > > 5) Provide some convenience functions for common patterns.
> > > 
> > > Rather than being designed as a "framework", the target is to
> > > make all
> > > features appear as a collection of optional helper functions,
> > > such that
> > > drivers are free to make use of the DRM GPUVA managers basic
> > > functionality and opt-in for other features without setting any
> > > feature
> > > flags, just by making use of the corresponding functions.
> > > 
> > > Big kudos to Boris Brezillon for his help to figure out locking
> > > for drivers
> > > updating the GPU VA space within the fence signalling path.
> > > 
> > > Suggested-by: Matthew Brost 
> > > Signed-off-by: Danilo Krummrich 
> > > ---
> > > 
> > > +/**
> > > + * drm_gpuvm_bo_evict() - add / remove a &drm_gem_object to / from a
> > > + * &drm_gpuvm's evicted list
> > > + * @obj: the &drm_gem_object to add or remove
> > > + * @evict: indicates whether the object is evicted
> > > + *
> > > + * Adds a &drm_gem_object to or removes it from all &drm_gpuvm's evicted
> > > + * list containing a mapping of this &drm_gem_object.
> > > + */
> > > +void
> > > +drm_gpuvm_bo_evict(struct drm_gem_object *obj, bool evict)
> > > +{
> > > +    struct drm_gpuvm_bo *vm_bo;
> > > +
> > > +    drm_gem_for_each_gpuvm_bo(vm_bo, obj) {
> > > +    if (evict)
> > > +    drm_gpuvm_bo_list_add(vm_bo, evict);
> > > +    else
> > > +    drm_gpuvm_bo_list_del(vm_bo, evict);
> > > +    }
> > > +}
> > > +EXPORT_SYMBOL_GPL(drm_gpuvm_bo_evict);
> > > +
> > 
> > We need a drm_gpuvm_bo_evict(struct drm_gpuvm_bo *vm_bo, ...) that
> > puts a single gpuvm_bo on the list; the above function could
> > perhaps be renamed as drm_gpuvm_gem_obj_evict(obj, ...).
> 
> Makes sense - gonna change that.
> 
> > 
> > Reason is some vm's are faulting vms which don't have an evict
> > list, but validate from the pagefault handler. Also evict == false
> > is dangerous because if called from within an exec, it might remove
> > the obj from other vm's evict list before they've had a chance to
> > rebind their VMAs.
> > 
> > >   static int
> > >   __drm_gpuva_insert(struct drm_gpuvm *gpuvm,
> > >  struct drm_gpuva *va)
> > > diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
> > > index afa50b9059a2..834bb6d6617e 100644
> > > --- a/include/drm/drm_gpuvm.h
> > > +++ b/include/drm/drm_gpuvm.h
> > > @@ -26,10 +26,12 @@
> > >    */
> > >   #include 
> > > +#include 
> > >   #include 
> > >   #include 
> > >   #include 
> > > +#include 
> > >   struct drm_gpuvm;
> > >   struct drm_gpuvm_bo;
> > > @@ -259,6 +261,38 @@ struct drm_gpuvm {
> > >    * space
> > >    */
> > >   struct dma_resv *resv;
> > > +
> > > +    /**
> > > + * @extobj: structure holding the extobj list
> > > + */
> > > +    struct {
> > > +    /**
> > > + * @list: &list_head storing &drm_gpuvm_bos serving as
> > > + * external object
> > > + */
> > > +    struct list_head list;
> > > +
> > > +    /**
> > > + * @lock: spinlock to protect the extobj list
> > > + */
> > > +    spinlock_t lock;
> > > +    } extobj;
> > > +
> > > +    /**
> > > + * @evict: structure holding the evict list and evict list
> > > lock
> > > + */
> > > +    struct {
> > > +    /**
> > > + * @list: &list_head storing &drm_gpuvm_bos currently
> > > being
> > > + * evicted
> > > + */
> > > +    struct list_head list;
> > > +
> > > +    /**
> > > + * @lock: spinlock to protect the evict list
> > > + */
> > > +    spinlock_t lock;
> > > +    } evict;
> > >   };
> > >   void 

Re: [Nouveau] [PATCH drm-misc-next v3 6/7] drm/gpuvm: generalize dma_resv/extobj handling and GEM validation

2023-09-14 Thread Danilo Krummrich

On 9/14/23 19:13, Thomas Hellström wrote:

On Thu, 2023-09-14 at 17:27 +0200, Danilo Krummrich wrote:

On 9/14/23 13:32, Thomas Hellström wrote:


On 9/14/23 12:57, Danilo Krummrich wrote:

On 9/13/23 14:16, Danilo Krummrich wrote:




And validate() can remove it while still holding all dma-
resv locks,
neat!
However, what if two tasks are trying to lock the VA space
concurrently? What
do we do when the drm_gpuvm_bo's refcount drops to zero in
drm_gpuva_unlink()?
Are we guaranteed that at this point of time the
drm_gpuvm_bo is not
on the
evicted list? Because otherwise we would call
drm_gpuvm_bo_destroy()
with the
dma-resv lock held, which wouldn't be allowed, since
drm_gpuvm_bo_destroy()
might drop the last reference to the drm_gem_object and
hence we'd
potentially
free the dma-resv lock while holding it, at least if it's
an external
object.


Easiest way in this scheme is to think of the lists as being
protected
by the vm's resv lock. That means anybody calling unlink()
must also
hold the vm's resv lock. (Which is OK from a UAF point of
view, but perhaps not from a locking-inversion POV for an async list
update).


This would mean that on unlink() we'd need to hold the VM's
resv lock and the
corresponding GEM's resv lock (in case they're not the same
anyways) because the
VM's resv lock would protect the external / evicted object
lists and the GEM
objects resv lock protects the GEM's list of drm_gpuvm_bos and
the
drm_gpuvm_bo's list of drm_gpuvas.


As mentioned below the same applies for drm_gpuvm_bo_put() since
it might
destroy the vm_bo, which includes removing the vm_bo from
external / evicted
object lists and the GEMs list of vm_bos.

As mentioned, if the GEM's dma-resv is different from the VM's
dma-resv we need
to take both locks. Ultimately, this would mean we need a
drm_exec loop, because
we can't know the order in which to take these locks. Doing a
full drm_exec loop
just to put() a vm_bo doesn't sound reasonable to me.

Can we instead just have an internal mutex for locking the lists
such that we
avoid taking and dropping the spinlocks, which we use currently,
in a loop?


You'd have the same locking inversion problem with a mutex, right?
Since in the eviction path you have resv->mutex, from exec you have
resv->mutex->resv because validate would attempt to grab resv.


Both lists, evict and extobj, would need to have a separate mutex,
not a common one.
We'd also need a dedicated GEM gpuva lock. Then the only rule would
be that you can't
hold the dma-resv lock when calling put(). Which I admit is not that
nice.

With the current spinlock solution drivers wouldn't need to worry
about anything locking
related though. So maybe I come back to your proposal of having a
switch for external
locking with dma-resv locks entirely. Such that with external dma-
resv locking I skip
all the spinlocks and add lockdep checks instead.

I think that makes the most sense in terms of taking advantage of
external dma-resv locking
where possible and on the other hand having a self-contained solution
if not. This should
get all concerns out of the way, yours, Christian's and Boris'.


If we need additional locks yes, I'd prefer the opt-in/opt-out spinlock
solution, and check back after a while to see if we can remove either
option once most pitfalls are hit.


Sounds good, I'll prepare this for a V4.

- Danilo



Thanks,
/Thomas






That said, xe currently indeed does the vm+bo exec dance on vma
put.

One reason why that seemingly horrible construct is good is that
when evicting an extobj and you need to access individual VMAs to
zap page table entries or do a TLB flush, those VMAs are not allowed to
go away (we're not refcounting them). Holding the bo resv on gpuva
put prevents that from happening. Possibly one could use another
mutex to protect the gem->vm_bo list to achieve the same, but we'd
need to hold it on gpuva put.

/Thomas




- Danilo









For extobjs an outer lock would be enough in case of
Xe, but I
really would not
like to add even more complexity just to get the
spinlock out of
the way in case
the driver already has an outer lock protecting this
path.


I must disagree here. These spinlocks and atomic
operations are
pretty
costly and as discussed earlier this type of locking was
the reason
(at
least according to the commit message) that made
Christian drop the
XArray
use in drm_exec for the same set of objects: "The locking overhead
is unnecessary and measurable". IMHO the spinlock is the
added
complexity and a
single wide lock following the drm locking guidelines set
out by
Daniel and
David should really be the default choice with an opt-in
for a
spinlock if
needed for async and pushing out to a wq is not an
option.


For the external object list an outer lock would work as
long as it's
not the
dma-resv lock of the corresponding GEM object, since here
we actually
need to
remove the list entry from the external object list on
drm_gpuvm_bo_destroy().
It's just a bit weird design wise that drivers 

Re: [Nouveau] [PATCH drm-misc-next v3 6/7] drm/gpuvm: generalize dma_resv/extobj handling and GEM validation

2023-09-14 Thread Thomas Hellström
On Thu, 2023-09-14 at 17:27 +0200, Danilo Krummrich wrote:
> On 9/14/23 13:32, Thomas Hellström wrote:
> > 
> > On 9/14/23 12:57, Danilo Krummrich wrote:
> > > On 9/13/23 14:16, Danilo Krummrich wrote:
> > > 
> > > 
> > > 
> > > > > > And validate() can remove it while still holding all dma-
> > > > > > resv locks,
> > > > > > neat!
> > > > > > However, what if two tasks are trying to lock the VA space
> > > > > > concurrently? What
> > > > > > do we do when the drm_gpuvm_bo's refcount drops to zero in
> > > > > > drm_gpuva_unlink()?
> > > > > > Are we guaranteed that at this point of time the
> > > > > > drm_gpuvm_bo is not
> > > > > > on the
> > > > > > evicted list? Because otherwise we would call
> > > > > > drm_gpuvm_bo_destroy()
> > > > > > with the
> > > > > > dma-resv lock held, which wouldn't be allowed, since
> > > > > > drm_gpuvm_bo_destroy()
> > > > > > might drop the last reference to the drm_gem_object and
> > > > > > hence we'd
> > > > > > potentially
> > > > > > free the dma-resv lock while holding it, at least if it's
> > > > > > an external
> > > > > > object.
> > > > > 
> > > > > Easiest way in this scheme is to think of the lists as being
> > > > > protected
> > > > > by the vm's resv lock. That means anybody calling unlink()
> > > > > must also
> > > > > hold the vm's resv lock. (Which is OK from a UAF point of
> > > > > view, but perhaps not from a locking-inversion POV for an async list
> > > > > update).
> > > > 
> > > > This would mean that on unlink() we'd need to hold the VM's
> > > > resv lock and the
> > > > corresponding GEM's resv lock (in case they're not the same
> > > > anyways) because the
> > > > VM's resv lock would protect the external / evicted object
> > > > lists and the GEM
> > > > objects resv lock protects the GEM's list of drm_gpuvm_bos and
> > > > the
> > > > drm_gpuvm_bo's list of drm_gpuvas.
> > > 
> > > As mentioned below the same applies for drm_gpuvm_bo_put() since
> > > it might
> > > destroy the vm_bo, which includes removing the vm_bo from
> > > external / evicted
> > > object lists and the GEMs list of vm_bos.
> > > 
> > > As mentioned, if the GEM's dma-resv is different from the VM's
> > > dma-resv we need
> > > to take both locks. Ultimately, this would mean we need a
> > > drm_exec loop, because
> > > we can't know the order in which to take these locks. Doing a
> > > full drm_exec loop
> > > just to put() a vm_bo doesn't sound reasonable to me.
> > > 
> > > Can we instead just have an internal mutex for locking the lists
> > > such that we
> > > avoid taking and dropping the spinlocks, which we use currently,
> > > in a loop?
> > 
> > You'd have the same locking inversion problem with a mutex, right?
> > Since in the eviction path you have resv->mutex, from exec you have
> > resv->mutex->resv because validate would attempt to grab resv.
> 
> Both lists, evict and extobj, would need to have a separate mutex,
> not a common one.
> We'd also need a dedicated GEM gpuva lock. Then the only rule would
> be that you can't
> hold the dma-resv lock when calling put(). Which I admit is not that
> nice.
> 
> With the current spinlock solution drivers wouldn't need to worry
> about anything locking
> related though. So maybe I come back to your proposal of having a
> switch for external
> locking with dma-resv locks entirely. Such that with external dma-
> resv locking I skip
> all the spinlocks and add lockdep checks instead.
> 
> I think that makes the most sense in terms of taking advantage of
> external dma-resv locking
> where possible and on the other hand having a self-contained solution
> if not. This should
> get all concerns out of the way, yours, Christian's and Boris'.

If we need additional locks yes, I'd prefer the opt-in/opt-out spinlock
solution, and check back after a while to see if we can remove either
option once most pitfalls are hit.
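
For the record, a minimal sketch of what such an opt-in switch could look like on a list accessor; the per-VM flag and its name are assumptions, while the evict list and its spinlock are the ones from this patch:

static void sketch_evict_list_add(struct drm_gpuvm *gpuvm,
                                  struct list_head *entry)
{
    if (gpuvm->flags & SKETCH_GPUVM_RESV_PROTECTED) { /* assumed flag */
        /* Externally locked: the VM's dma-resv protects the lists. */
        dma_resv_assert_held(gpuvm->resv);
        list_add_tail(entry, &gpuvm->evict.list);
    } else {
        /* Self-contained: fall back to the internal spinlock. */
        spin_lock(&gpuvm->evict.lock);
        list_add_tail(entry, &gpuvm->evict.list);
        spin_unlock(&gpuvm->evict.lock);
    }
}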

Thanks,
/Thomas


> 
> > 
> > That said, xe currently indeed does the vm+bo exec dance on vma
> > put.
> > 
> > One reason why that seemingly horrible construct is good is that
> > when evicting an extobj and you need to access individual VMAs to
> > zap page table entries or do a TLB flush, those VMAs are not allowed to
> > go away (we're not refcounting them). Holding the bo resv on gpuva
> > put prevents that from happening. Possibly one could use another
> > mutex to protect the gem->vm_bo list to achieve the same, but we'd
> > need to hold it on gpuva put.
> > 
> > /Thomas
> > 
> > 
> > > 
> > > - Danilo
> > > 
> > > > 
> > > > > 
> > > > > > 
> > > > > > > > 
> > > > > > > > For extobjs an outer lock would be enough in case of
> > > > > > > > Xe, but I
> > > > > > > > really would not
> > > > > > > > like to add even more complexity just to get the
> > > > > > > > spinlock out of
> > > > > > > > the way in case
> > > > > > > > the driver already has an outer lock protecting this
> > > > > > > > path.
> > > > > > > 
> > > > > > > I must disagree here. These spinlocks and atomic
> > > > > > > 

Re: [Nouveau] [PATCH drm-misc-next v3 6/7] drm/gpuvm: generalize dma_resv/extobj handling and GEM validation

2023-09-14 Thread Danilo Krummrich

On 9/14/23 15:48, Thomas Hellström wrote:

Hi, Danilo

Some additional minor comments as xe conversion progresses.

On 9/9/23 17:31, Danilo Krummrich wrote:

So far the DRM GPUVA manager offers common infrastructure to track GPU VA
allocations and mappings, generically connect GPU VA mappings to their
backing buffers and perform more complex mapping operations on the GPU VA
space.

However, there are more design patterns commonly used by drivers, which
can potentially be generalized in order to make the DRM GPUVA manager
represent a basic GPU-VM implementation. In this context, this patch aims
at generalizing the following elements.

1) Provide a common dma-resv for GEM objects not being used outside of
    this GPU-VM.

2) Provide tracking of external GEM objects (GEM objects which are
    shared with other GPU-VMs).

3) Provide functions to efficiently lock all GEM objects dma-resv the
    GPU-VM contains mappings of.

4) Provide tracking of evicted GEM objects the GPU-VM contains mappings
    of, such that validation of evicted GEM objects is accelerated.

5) Provide some convenience functions for common patterns.

Rather than being designed as a "framework", the target is to make all
features appear as a collection of optional helper functions, such that
drivers are free to make use of the DRM GPUVA managers basic
functionality and opt-in for other features without setting any feature
flags, just by making use of the corresponding functions.

Big kudos to Boris Brezillon for his help to figure out locking for drivers
updating the GPU VA space within the fence signalling path.

Suggested-by: Matthew Brost 
Signed-off-by: Danilo Krummrich 
---

+/**
+ * drm_gpuvm_bo_evict() - add / remove a &drm_gem_object to / from a
+ * &drm_gpuvm's evicted list
+ * @obj: the &drm_gem_object to add or remove
+ * @evict: indicates whether the object is evicted
+ *
+ * Adds a &drm_gem_object to or removes it from all &drm_gpuvm's evicted
+ * list containing a mapping of this &drm_gem_object.
+ */
+void
+drm_gpuvm_bo_evict(struct drm_gem_object *obj, bool evict)
+{
+    struct drm_gpuvm_bo *vm_bo;
+
+    drm_gem_for_each_gpuvm_bo(vm_bo, obj) {
+    if (evict)
+    drm_gpuvm_bo_list_add(vm_bo, evict);
+    else
+    drm_gpuvm_bo_list_del(vm_bo, evict);
+    }
+}
+EXPORT_SYMBOL_GPL(drm_gpuvm_bo_evict);
+


We need a drm_gpuvm_bo_evict(struct drm_gpuvm_bo *vm_bo, ...) that puts a
single gpuvm_bo on the list; the above function could perhaps be renamed as
drm_gpuvm_gem_obj_evict(obj, ...).


Makes sense - gonna change that.



Reason is some VMs are faulting VMs which don't have an evict list, but
validate from the pagefault handler. Also evict == false is dangerous because,
if called from within an exec, it might remove the obj from other VMs' evict
lists before they've had a chance to rebind their VMAs.
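
A sketch of the suggested split, reusing the list helpers from this patch; the function names follow the suggestion above and are not final:

void drm_gpuvm_bo_evict(struct drm_gpuvm_bo *vm_bo, bool evict)
{
    if (evict)
        drm_gpuvm_bo_list_add(vm_bo, evict);
    else
        drm_gpuvm_bo_list_del(vm_bo, evict);
}

void drm_gpuvm_gem_obj_evict(struct drm_gem_object *obj, bool evict)
{
    struct drm_gpuvm_bo *vm_bo;

    drm_gem_for_each_gpuvm_bo(vm_bo, obj)
        drm_gpuvm_bo_evict(vm_bo, evict);
}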


  static int
  __drm_gpuva_insert(struct drm_gpuvm *gpuvm,
 struct drm_gpuva *va)
diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
index afa50b9059a2..834bb6d6617e 100644
--- a/include/drm/drm_gpuvm.h
+++ b/include/drm/drm_gpuvm.h
@@ -26,10 +26,12 @@
   */
  #include 
+#include 
  #include 
  #include 
  #include 
+#include 
  struct drm_gpuvm;
  struct drm_gpuvm_bo;
@@ -259,6 +261,38 @@ struct drm_gpuvm {
   * space
   */
  struct dma_resv *resv;
+
+    /**
+ * @extobj: structure holding the extobj list
+ */
+    struct {
+    /**
+ * @list: &list_head storing &drm_gpuvm_bos serving as
+ * external object
+ */
+    struct list_head list;
+
+    /**
+ * @lock: spinlock to protect the extobj list
+ */
+    spinlock_t lock;
+    } extobj;
+
+    /**
+ * @evict: structure holding the evict list and evict list lock
+ */
+    struct {
+    /**
+ * @list: &list_head storing &drm_gpuvm_bos currently being
+ * evicted
+ */
+    struct list_head list;
+
+    /**
+ * @lock: spinlock to protect the evict list
+ */
+    spinlock_t lock;
+    } evict;
  };
  void drm_gpuvm_init(struct drm_gpuvm *gpuvm, struct drm_device *drm,
@@ -268,6 +302,21 @@ void drm_gpuvm_init(struct drm_gpuvm *gpuvm, struct 
drm_device *drm,
  const struct drm_gpuvm_ops *ops);
  void drm_gpuvm_destroy(struct drm_gpuvm *gpuvm);
+/**
+ * drm_gpuvm_is_extobj() - indicates whether the given &drm_gem_object is an
+ * external object
+ * @gpuvm: the &drm_gpuvm to check
+ * @obj: the &drm_gem_object to check
+ *
+ * Returns: true if the &drm_gem_object &dma_resv differs from the
+ * &drm_gpuvm's &dma_resv, false otherwise
+ */
+static inline bool drm_gpuvm_is_extobj(struct drm_gpuvm *gpuvm,
+   struct drm_gem_object *obj)
+{
+    return obj && obj->resv != gpuvm->resv;
+}
+
  static inline struct drm_gpuva *
  __drm_gpuva_next(struct drm_gpuva *va)
  {
@@ -346,6 +395,128 @@ __drm_gpuva_next(struct drm_gpuva *va)
  #define drm_gpuvm_for_each_va_safe(va__, next__, gpuvm__) \
  list_for_each_entry_safe(va__, 

Re: [Nouveau] [PATCH drm-misc-next v3 6/7] drm/gpuvm: generalize dma_resv/extobj handling and GEM validation

2023-09-14 Thread Danilo Krummrich

On 9/14/23 13:32, Thomas Hellström wrote:


On 9/14/23 12:57, Danilo Krummrich wrote:

On 9/13/23 14:16, Danilo Krummrich wrote:




And validate() can remove it while still holding all dma-resv locks,
neat!
However, what if two tasks are trying to lock the VA space
concurrently? What
do we do when the drm_gpuvm_bo's refcount drops to zero in
drm_gpuva_unlink()?
Are we guaranteed that at this point of time the drm_gpuvm_bo is not
on the
evicted list? Because otherwise we would call drm_gpuvm_bo_destroy()
with the
dma-resv lock held, which wouldn't be allowed, since
drm_gpuvm_bo_destroy()
might drop the last reference to the drm_gem_object and hence we'd
potentially
free the dma-resv lock while holding it, at least if it's an external
object.


Easiest way in this scheme is to think of the lists as being protected
by the vm's resv lock. That means anybody calling unlink() must also
hold the vm's resv lock. (Which is OK from a UAF point of view, but
perhaps not from a locking-inversion POV for an async list update).


This would mean that on unlink() we'd need to hold the VM's resv lock and the
corresponding GEM's resv lock (in case they're not the same anyways) because the
VM's resv lock would protect the external / evicted object lists and the GEM
objects resv lock protects the GEM's list of drm_gpuvm_bos and the
drm_gpuvm_bo's list of drm_gpuvas.


As mentioned below the same applies for drm_gpuvm_bo_put() since it might
destroy the vm_bo, which includes removing the vm_bo from external / evicted
object lists and the GEMs list of vm_bos.

As mentioned, if the GEM's dma-resv is different from the VM's dma-resv we need
to take both locks. Ultimately, this would mean we need a drm_exec loop, because
we can't know the order in which to take these locks. Doing a full drm_exec loop
just to put() a vm_bo doesn't sound reasonable to me.

Can we instead just have an internal mutex for locking the lists such that we
avoid taking and dropping the spinlocks, which we use currently, in a loop?


You'd have the same locking inversion problem with a mutex, right? Since in the eviction 
path you have resv->mutex, from exec you have resv->mutex->resv because 
validate would attempt to grab resv.


Both lists, evict and extobj, would need to have a separate mutex, not a common 
one.
We'd also need a dedicated GEM gpuva lock. Then the only rule would be that you 
can't
hold the dma-resv lock when calling put(). Which I admit is not that nice.

With the current spinlock solution drivers wouldn't need to worry about 
anything locking
related though. So maybe I come back to your proposal of having a switch for 
external
locking with dma-resv locks entirely. Such that with external dma-resv locking 
I skip
all the spinlocks and add lockdep checks instead.

I think that makes the most sense in terms of taking advantage of external 
dma-resv locking
where possible and on the other hand having a self-contained solution if not. 
This should
get all concerns out of the way, yours, Christian's and Boris'.



That said, xe currently indeed does the vm+bo exec dance on vma put.

One reason why that seemingly horrible construct is good is that when evicting an
extobj and you need to access individual VMAs to zap page table entries or do a TLB
flush, those VMAs are not allowed to go away (we're not refcounting them). Holding
the bo resv on gpuva put prevents that from happening. Possibly one could use 
another mutex to protect the gem->vm_bo list to achieve the same, but we'd need 
to hold it on gpuva put.
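
For clarity, a trimmed sketch of that vm + bo drm_exec dance around a vm_bo put; error handling is dropped, and the helper returning the GEM object backing the VM's common dma-resv is made up:

static void sketch_vm_bo_put_locked(struct drm_gpuvm *gpuvm,
                                    struct drm_gpuvm_bo *vm_bo)
{
    struct drm_exec exec;

    drm_exec_init(&exec, DRM_EXEC_INTERRUPTIBLE_WAIT);
    drm_exec_until_all_locked(&exec) {
        /* Lock the GEM the vm_bo belongs to ... */
        drm_exec_lock_obj(&exec, vm_bo->obj);
        drm_exec_retry_on_contention(&exec);
        /* ... and the object backing the VM's dma-resv (hypothetical helper). */
        drm_exec_lock_obj(&exec, sketch_gpuvm_resv_obj(gpuvm));
        drm_exec_retry_on_contention(&exec);
    }
    drm_gpuvm_bo_put(vm_bo);
    drm_exec_fini(&exec);
}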

/Thomas




- Danilo









For extobjs an outer lock would be enough in case of Xe, but I
really would not
like to add even more complexity just to get the spinlock out of
the way in case
the driver already has an outer lock protecting this path.


I must disagree here. These spinlocks and atomic operations are
pretty
costly and as discussed earlier this type of locking was the reason
(at
least according to the commit message) that made Christian drop the
XArray
use in drm_exec for the same set of objects: "The locking overhead
is unnecessary and measurable". IMHO the spinlock is the added
complexity and a
single wide lock following the drm locking guidelines set out by
Daniel and
David should really be the default choice with an opt-in for a
spinlock if
needed for async and pushing out to a wq is not an option.


For the external object list an outer lock would work as long as it's
not the
dma-resv lock of the corresponding GEM object, since here we actually
need to
remove the list entry from the external object list on
drm_gpuvm_bo_destroy().
It's just a bit weird design-wise that drivers would need to take
this outer lock on:

- drm_gpuvm_bo_extobj_add()
- drm_gpuvm_bo_destroy() (and hence also drm_gpuvm_bo_put())
- drm_gpuva_unlink() (because it needs to call drm_gpuvm_bo_put())
- drm_gpuvm_exec_lock()
- drm_gpuvm_exec_lock_array()
- drm_gpuvm_prepare_range()

Given that it seems reasonable to do all the 

Re: [Nouveau] [PATCH drm-misc-next v3 6/7] drm/gpuvm: generalize dma_resv/extobj handling and GEM validation

2023-09-14 Thread Thomas Hellström

Hi,

On 9/14/23 13:54, Boris Brezillon wrote:

On Thu, 14 Sep 2023 12:45:44 +0200
Thomas Hellström  wrote:


On 9/14/23 10:20, Boris Brezillon wrote:

On Wed, 13 Sep 2023 15:22:56 +0200
Thomas Hellström  wrote:
  

On 9/13/23 13:33, Boris Brezillon wrote:

On Wed, 13 Sep 2023 12:39:01 +0200
Thomas Hellström  wrote:
 

Hi,

On 9/13/23 09:19, Boris Brezillon wrote:

On Wed, 13 Sep 2023 17:05:42 +1000
Dave Airlie  wrote:


On Wed, 13 Sept 2023 at 17:03, Boris Brezillon
 wrote:

On Tue, 12 Sep 2023 18:20:32 +0200
Thomas Hellström  wrote:
   

+/**
+ * get_next_vm_bo_from_list() - get the next vm_bo element
+ * @__gpuvm: The GPU VM
+ * @__list_name: The name of the list we're iterating on
+ * @__local_list: A pointer to the local list used to store already iterated 
items
+ * @__prev_vm_bo: The previous element we got from 
drm_gpuvm_get_next_cached_vm_bo()
+ *
+ * This helper is here to provide lockless list iteration. Lockless as in, the
+ * iterator releases the lock immediately after picking the first element from
+ * the list, so list insertion deletion can happen concurrently.
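
As an illustration of the pattern that comment describes (not the helper's actual body): the spinlock is held only long enough to pop one element and park it on a caller-provided local list, so concurrent insertion/deletion stays possible; the list member name below is made up:

static struct drm_gpuvm_bo *
sketch_next_vm_bo(spinlock_t *lock, struct list_head *list,
                  struct list_head *local_list)
{
    struct drm_gpuvm_bo *vm_bo = NULL;

    spin_lock(lock);
    if (!list_empty(list)) {
        vm_bo = list_first_entry(list, struct drm_gpuvm_bo,
                                 list.entry.extobj); /* assumed member */
        list_move_tail(&vm_bo->list.entry.extobj, local_list);
    }
    spin_unlock(lock);

    return vm_bo;
}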

Are the list spinlocks needed for that async state update from within
the dma-fence critical section we've discussed previously?

Any driver calling _[un]link() from its drm_gpu_scheduler::run_job()
hook will be in this situation (Panthor at the moment, PowerVR soon). I
get that Xe and Nouveau don't need that because they update the VM
state early (in the ioctl path), but I keep thinking this will hurt us
if we don't think it through from the beginning, because once you've
set this logic to depend only on resv locks, it will be pretty hard to
get back to a solution which lets synchronous VM_BINDs take precedence
over asynchronous requests, and, with vkQueueBindSparse() passing external
deps (plus the fact the VM_BIND queue might be pretty deep), it can
take a long time to get your synchronous VM_BIND executed...

So this would boil down to either (possibly opt-in) keeping the spinlock
approach or pushing the unlink out to a wq then?

Deferred _unlink() would not be an issue, since I already defer the
drm_gpuva destruction to a wq; it would just be a matter of moving the
_unlink() call there as well. But _link() also takes the GEM gpuva list
lock, and that one is bit tricky, in that sm_map() can trigger 2 more
_link() calls for the prev/next mappings, which we can't guess until we
get to execute the VM update. If we mandate the use of the GEM resv
lock, that simply means async VM updates (AKA calling
drm_gpuvm_sm_[un]map()) are not an option. And if this is what everyone
agrees on, then I'd like the APIs that make this sort of async VM
update possible (drm_gpuvm_sm_[un]map(), the drm_gpuvm_ops::sm_step*
methods, and probably other things) to be dropped, so we don't make it
look like it's something we support.
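
A minimal sketch of that deferral; the cleanup structure and its handling are assumptions about the driver side, not drm_gpuvm API:

struct sketch_vma_cleanup {
    struct work_struct work;
    struct drm_gpuva *va;
};

static void sketch_vma_cleanup_work(struct work_struct *work)
{
    struct sketch_vma_cleanup *cleanup =
        container_of(work, struct sketch_vma_cleanup, work);

    /* Runs outside run_job(), so taking the GEM gpuva list lock is fine. */
    drm_gpuva_unlink(cleanup->va);
    kfree(cleanup);
}

/* Called from the dma-fence signalling path instead of drm_gpuva_unlink();
 * the cleanup struct must have been allocated beforehand. */
static void sketch_defer_unlink(struct sketch_vma_cleanup *cleanup,
                                struct workqueue_struct *wq)
{
    INIT_WORK(&cleanup->work, sketch_vma_cleanup_work);
    queue_work(wq, &cleanup->work);
}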
 

BTW, as also asked in a reply to Danilo, how do you call unlink from
run_job() when it was requiring the obj->dma_resv lock, or was that a WIP?

_unlink() makes sure the GEM gpuva list lock is taken, but this can be
a custom lock (see drm_gem_gpuva_set_lock()). In panthor we have
panthor_gem_object::gpuva_list_lock that's dedicated to the gpuva list
protection. We make sure we never take this lock while allocating
memory to guarantee the dma-signalling path can't deadlock.
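
A small sketch of that wiring, assuming drm_gem_gpuva_set_lock() takes the GEM object and a pointer to the driver's lock; the object layout is illustrative, not panthor's actual code:

struct sketch_gem_object {
    struct drm_gem_object base;
    /* Dedicated gpuva list lock; never taken while allocating memory,
     * so the dma-fence signalling path can't deadlock on it. */
    struct mutex gpuva_list_lock;
};

static void sketch_gem_init(struct sketch_gem_object *bo)
{
    mutex_init(&bo->gpuva_list_lock);
    drm_gem_gpuva_set_lock(&bo->base, &bo->gpuva_list_lock);
}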
 
   

btw what is the use case for this? do we have actual vulkan
applications we know will have problems here?

I don't, but I think that's a concern Faith raised at some point (dates
back from when I was reading threads describing how VM_BIND on i915
should work, and I was clearly discovering this whole VM_BIND thing at
that time, so maybe I misunderstood).


it feels like a bit of premature optimisation, but maybe we have use cases.

Might be, but that's the sort of thing that would put us in a corner if
we don't have a plan for when the needs arise. Besides, if we don't
want to support that case because it's too complicated, I'd recommend
dropping all the drm_gpuvm APIs that let people think this mode is
valid/supported (map/remap/unmap hooks in drm_gpuvm_ops,
drm_gpuvm_sm_[un]map helpers, etc). Keeping them around just adds to the
confusion.

Xe allows bypassing the bind-queue with another bind-queue, but to
completely avoid dependencies between queues the operations may not
overlap.

So, you check the VM state with some VM lock held (would be the VM resv
in my case), and if the mapping is new (no overlaps with pre-existing
mappings), you queue it to the fast-track/sync-VM_BIND queue. What would
be missing I guess is a way to know if the mapping is active (MMU has
been updated) or pending (MMU update queued to the bind-queue), so I can
fast-track mapping/unmapping of active mappings.
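A naive form of that overlap check could look like the sketch below, using
drm_gpuva_find_first() against the current VA space (as the follow-up below
explains, this alone isn't sufficient while asynchronous unmaps are still in
flight):

/* Sketch: decide whether a new mapping may take the synchronous fast path.
 * Called with the VM lock (e.g. the VM resv) held.
 */
static bool can_fast_track_map(struct drm_gpuvm *gpuvm, u64 addr, u64 range)
{
        /* True if no pre-existing mapping overlaps [addr, addr + range). */
        return !drm_gpuva_find_first(gpuvm, addr, range);
}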

Ok, so I started modifying the implementation, and quickly realized the
overlap test can't be done without your xe_range_fence tree because of
unmaps. Since we call drm_gpuva_unmap() early/in the IOCTL path (IOW,
before the 

Re: [Nouveau] [PATCH drm-misc-next v3 6/7] drm/gpuvm: generalize dma_resv/extobj handling and GEM validation

2023-09-14 Thread Thomas Hellström

Hi, Danilo

Some additional minor comments as xe conversion progresses.

On 9/9/23 17:31, Danilo Krummrich wrote:

So far the DRM GPUVA manager offers common infrastructure to track GPU VA
allocations and mappings, generically connect GPU VA mappings to their
backing buffers and perform more complex mapping operations on the GPU VA
space.

However, there are more design patterns commonly used by drivers, which
can potentially be generalized in order to make the DRM GPUVA manager
represent a basic GPU-VM implementation. In this context, this patch aims
at generalizing the following elements.

1) Provide a common dma-resv for GEM objects not being used outside of
this GPU-VM.

2) Provide tracking of external GEM objects (GEM objects which are
shared with other GPU-VMs).

3) Provide functions to efficiently lock all GEM objects dma-resv the
GPU-VM contains mappings of.

4) Provide tracking of evicted GEM objects the GPU-VM contains mappings
of, such that validation of evicted GEM objects is accelerated.

5) Provide some convenience functions for common patterns.

Rather than being designed as a "framework", the target is to make all
features appear as a collection of optional helper functions, such that
drivers are free to make use of the DRM GPUVA manager's basic
functionality and opt-in for other features without setting any feature
flags, just by making use of the corresponding functions.

Big kudos to Boris Brezillon for his help to figure out locking for drivers
updating the GPU VA space within the fence signalling path.

Suggested-by: Matthew Brost 
Signed-off-by: Danilo Krummrich 
---
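Not part of the patch, but to make the opt-in usage described above concrete,
a driver's exec path could combine the helpers roughly as sketched below (the
signatures of drm_gpuvm_prepare_objects() and drm_gpuvm_validate() are the
ones proposed in this series and may change; error handling is trimmed):

/* Sketch: lock the VM resv plus all external object resvs, then revalidate
 * whatever sits on the VM's evicted list. The caller drops the locks with
 * drm_exec_fini() once it has submitted its job.
 */
static int my_lock_and_validate(struct drm_gpuvm *gpuvm, struct drm_exec *exec,
                                unsigned int num_fences)
{
        int ret;

        drm_exec_init(exec, DRM_EXEC_INTERRUPTIBLE_WAIT);
        drm_exec_until_all_locked(exec) {
                ret = drm_gpuvm_prepare_objects(gpuvm, exec, num_fences);
                drm_exec_retry_on_contention(exec);
                if (ret)
                        goto err;
        }

        ret = drm_gpuvm_validate(gpuvm);
        if (ret)
                goto err;

        return 0;

err:
        drm_exec_fini(exec);
        return ret;
}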

+/**
+ * drm_gpuvm_bo_evict() - add / remove a &drm_gem_object to / from a
+ * &drm_gpuvms evicted list
+ * @obj: the &drm_gem_object to add or remove
+ * @evict: indicates whether the object is evicted
+ *
+ * Adds a &drm_gem_object to or removes it from all &drm_gpuvms evicted
+ * list containing a mapping of this &drm_gem_object.
+ */
+void
+drm_gpuvm_bo_evict(struct drm_gem_object *obj, bool evict)
+{
+   struct drm_gpuvm_bo *vm_bo;
+
+   drm_gem_for_each_gpuvm_bo(vm_bo, obj) {
+   if (evict)
+   drm_gpuvm_bo_list_add(vm_bo, evict);
+   else
+   drm_gpuvm_bo_list_del(vm_bo, evict);
+   }
+}
+EXPORT_SYMBOL_GPL(drm_gpuvm_bo_evict);
+


We need a drm_gpuvm_bo_evict(struct drm_gpuvm_bo *vm_bo, ...) that puts 
a single gpuvm_bo on the list; the above function could perhaps be 
renamed as drm_gpuvm_gem_obj_evict(obj, ).


The reason is that some VMs are faulting VMs which don't have an evict list, but 
validate from the pagefault handler. Also, evict == false is dangerous 
because, if called from within an exec, it might remove the obj from 
other VMs' evict lists before they've had a chance to rebind their VMAs.
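Something along these lines, reusing the internal list helpers the patch
already has (naming per the suggestion above; just a sketch, not an actual
patch):

void
drm_gpuvm_bo_evict(struct drm_gpuvm_bo *vm_bo, bool evict)
{
        if (evict)
                drm_gpuvm_bo_list_add(vm_bo, evict);
        else
                drm_gpuvm_bo_list_del(vm_bo, evict);
}

void
drm_gpuvm_gem_obj_evict(struct drm_gem_object *obj, bool evict)
{
        struct drm_gpuvm_bo *vm_bo;

        drm_gem_for_each_gpuvm_bo(vm_bo, obj)
                drm_gpuvm_bo_evict(vm_bo, evict);
}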



  static int
  __drm_gpuva_insert(struct drm_gpuvm *gpuvm,
   struct drm_gpuva *va)
diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
index afa50b9059a2..834bb6d6617e 100644
--- a/include/drm/drm_gpuvm.h
+++ b/include/drm/drm_gpuvm.h
@@ -26,10 +26,12 @@
   */
  
  #include 

+#include 
  #include 
  #include 
  
  #include 

+#include 
  
  struct drm_gpuvm;

  struct drm_gpuvm_bo;
@@ -259,6 +261,38 @@ struct drm_gpuvm {
 * space
 */
struct dma_resv *resv;
+
+   /**
+* @extobj: structure holding the extobj list
+*/
+   struct {
+   /**
+* @list: &list_head storing &drm_gpuvm_bos serving as
+* external object
+*/
+   struct list_head list;
+
+   /**
+* @lock: spinlock to protect the extobj list
+*/
+   spinlock_t lock;
+   } extobj;
+
+   /**
+* @evict: structure holding the evict list and evict list lock
+*/
+   struct {
+   /**
+* @list: &list_head storing &drm_gpuvm_bos currently being
+* evicted
+*/
+   struct list_head list;
+
+   /**
+* @lock: spinlock to protect the evict list
+*/
+   spinlock_t lock;
+   } evict;
  };
  
  void drm_gpuvm_init(struct drm_gpuvm *gpuvm, struct drm_device *drm,

@@ -268,6 +302,21 @@ void drm_gpuvm_init(struct drm_gpuvm *gpuvm, struct 
drm_device *drm,
const struct drm_gpuvm_ops *ops);
  void drm_gpuvm_destroy(struct drm_gpuvm *gpuvm);
  
+/**

+ * drm_gpuvm_is_extobj() - indicates whether the given &drm_gem_object is an
+ * external object
+ * @gpuvm: the &drm_gpuvm to check
+ * @obj: the &drm_gem_object to check
+ *
+ * Returns: true if the &drm_gem_object's &dma_resv differs from the
+ * &drm_gpuvm's &dma_resv, false otherwise
+ */
+static inline bool drm_gpuvm_is_extobj(struct drm_gpuvm *gpuvm,
+  struct drm_gem_object *obj)
+{
+   return obj && obj->resv != gpuvm->resv;
+}
+
  static inline struct drm_gpuva *
  __drm_gpuva_next(struct 
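As a usage illustration for drm_gpuvm_is_extobj() above (hypothetical call
site; drm_gpuvm_bo_extobj_add() is the helper discussed elsewhere in this
thread):

/* Sketch: a vm_bo whose GEM has its own resv, i.e. the BO is shared with
 * other VMs, gets tracked on the VM's extobj list.
 */
static void my_vm_bo_track(struct drm_gpuvm *gpuvm, struct drm_gpuvm_bo *vm_bo,
                           struct drm_gem_object *obj)
{
        if (drm_gpuvm_is_extobj(gpuvm, obj))
                drm_gpuvm_bo_extobj_add(vm_bo);
}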

Re: [Nouveau] [PATCH drm-misc-next v3 6/7] drm/gpuvm: generalize dma_resv/extobj handling and GEM validation

2023-09-14 Thread Thomas Hellström



On 9/14/23 12:57, Danilo Krummrich wrote:

On 9/13/23 14:16, Danilo Krummrich wrote:




And validate() can remove it while still holding all dma-resv locks,
neat!
However, what if two tasks are trying to lock the VA space
concurrently? What
do we do when the drm_gpuvm_bo's refcount drops to zero in
drm_gpuva_unlink()?
Are we guaranteed that at this point of time the drm_gpuvm_bo is not
on the
evicted list? Because otherwise we would call drm_gpuvm_bo_destroy()
with the
dma-resv lock held, which wouldn't be allowed, since
drm_gpuvm_bo_destroy()
might drop the last reference to the drm_gem_object and hence we'd
potentially
free the dma-resv lock while holding it, at least if it's an external
object.


Easiest way in this scheme is to think of the lists as being protected
by the vm's resv lock. That means anybody calling unlink() must also
hold the vm's resv lock. (Which is OK from an UAF point of view, but
perhaps not from a locking inversion POV from an async list update).
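In code, that contract amounts to little more than an assert in the list
manipulation paths (sketch, assuming the VM resv is the protecting lock):

static void sketch_gpuva_unlink(struct drm_gpuvm *gpuvm, struct drm_gpuva *va)
{
        /* The caller must hold the VM resv, which protects the vm_bo lists. */
        dma_resv_assert_held(gpuvm->resv);
        drm_gpuva_unlink(va);
}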


This would mean that on unlink() we'd need to hold the VM's resv lock 
and the
corresponding GEM's resv lock (in case they're not the same anyways) 
because the
VM's resv lock would protect the external / evicted object lists and 
the GEM

object's resv lock protects the GEM's list of drm_gpuvm_bos and the
drm_gpuvm_bo's list of drm_gpuvas.


As mentioned below the same applies for drm_gpuvm_bo_put() since it might
destroy the vm_bo, which includes removing the vm_bo from external / 
evicted

object lists and the GEMs list of vm_bos.

As mentioned, if the GEM's dma-resv is different from the VM's 
dma-resv we need
to take both locks. Ultimately, this would mean we need a drm_exec 
loop, because
we can't know the order in which to take these locks. Doing a full 
drm_exec loop

just to put() a vm_bo doesn't sound reasonable to me.

Can we instead just have an internal mutex for locking the lists such 
that we
avoid taking and dropping the spinlocks, which we use currently, in a 
loop?


You'd have the same locking inversion problem with a mutex, right? Since 
in the eviction path you have resv->mutex, from exec you have 
resv->mutex->resv because validate would attempt to grab resv.


That said, xe currently indeed does the vm+bo exec dance on vma put.

One reason why that seemingly horrible construct is good is that when 
evicting an extobj and you need to access individual vmas to zap page 
table entries or TLB flush, those VMAs are not allowed to go away (we're 
not refcounting them). Holding the bo resv on gpuva put prevents that 
from happening. Possibly one could use another mutex to protect the 
gem->vm_bo list to achieve the same, but we'd need to hold it on gpuva put.


/Thomas




- Danilo









For extobjs an outer lock would be enough in case of Xe, but I
really would not
like to add even more complexity just to get the spinlock out of
the way in case
the driver already has an outer lock protecting this path.


I must disagree here. These spinlocks and atomic operations are
pretty
costly and as discussed earlier this type of locking was the reason
(at
least according to the commit message) that made Christian drop the
XArray
use in drm_exec for the same set of objects: "The locking overhead
is
unecessary and measurable". IMHO the spinlock is the added
complexity and a
single wide lock following the drm locking guidelines set out by
Daniel and
David should really be the default choice with an opt-in for a
spinlock if
needed for async and pushing out to a wq is not an option.


For the external object list an outer lock would work as long as it's
not the
dma-resv lock of the corresponding GEM object, since here we actually
need to
remove the list entry from the external object list on
drm_gpuvm_bo_destroy().
It's just a bit weird design wise that drivers would need to take
this outer
lock on:

- drm_gpuvm_bo_extobj_add()
- drm_gpuvm_bo_destroy()(and hence also drm_gpuvm_bo_put())
- drm_gpuva_unlink()(because it needs to call
drm_gpuvm_bo_put())
- drm_gpuvm_exec_lock()
- drm_gpuvm_exec_lock_array()
- drm_gpuvm_prepare_range()

Given that it seems reasonable to do all the required locking
internally.


 From a design POV, there has been a clear direction in XE to make
things similar to mmap() / munmap(), so this outer lock, which in Xe is
an rwsem, is used in a similar way as the mmap_lock. It's protecting
the page-table structures and vma rb tree, the userptr structures and
the extobj list. Basically it's taken early in the exec IOCTL, the
VM_BIND ioctl, the compute rebind worker and the pagefault handler, so
all of the above are just asserting that it is taken in the correct
mode.

But strictly with this scheme one could also use the vm's dma_resv for
the extobj list since with drm_exec, it's locked before traversing the
list.

The whole point of this scheme is to rely on locks that you already are
supposed to be holding for various reasons and is simple to comprehend.


I don't agree that we're 

Re: [Nouveau] [PATCH drm-misc-next v3 6/7] drm/gpuvm: generalize dma_resv/extobj handling and GEM validation

2023-09-14 Thread Danilo Krummrich

On 9/13/23 14:16, Danilo Krummrich wrote:




And validate() can remove it while still holding all dma-resv locks,
neat!
However, what if two tasks are trying to lock the VA space
concurrently? What
do we do when the drm_gpuvm_bo's refcount drops to zero in
drm_gpuva_unlink()?
Are we guaranteed that at this point of time the drm_gpuvm_bo is not
on the
evicted list? Because otherwise we would call drm_gpuvm_bo_destroy()
with the
dma-resv lock held, which wouldn't be allowed, since
drm_gpuvm_bo_destroy()
might drop the last reference to the drm_gem_object and hence we'd
potentially
free the dma-resv lock while holding it, at least if it's an external
object.


Easiest way in this scheme is to think of the lists as being protected
by the vm's resv lock. That means anybody calling unlink() must also
hold the vm's resv lock. (Which is OK from an UAF point of view, but
perhaps not from a locking inversion POV from an async list update).


This would mean that on unlink() we'd need to hold the VM's resv lock and the
corresponding GEM's resv lock (in case they're not the same anyways) because the
VM's resv lock would protect the external / evicted object lists and the GEM
object's resv lock protects the GEM's list of drm_gpuvm_bos and the
drm_gpuvm_bo's list of drm_gpuvas.


As mentioned below the same applies for drm_gpuvm_bo_put() since it might
destroy the vm_bo, which includes removing the vm_bo from external / evicted
object lists and the GEMs list of vm_bos.

As mentioned, if the GEM's dma-resv is different from the VM's dma-resv we need
to take both locks. Ultimately, this would mean we need a drm_exec loop, because
we can't know the order in which to take these locks. Doing a full drm_exec loop
just to put() a vm_bo doesn't sound reasonable to me.
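For reference, such a loop would look roughly like the sketch below (drm_exec
helper signatures assumed, error handling trimmed), which is exactly the
overhead being objected to here:

/* Sketch: lock the (possibly foreign) GEM resv, and the VM resv if it
 * differs, in whatever order drm_exec resolves, just to drop one vm_bo ref.
 */
static void sketch_vm_bo_put(struct drm_gpuvm *gpuvm, struct drm_gpuvm_bo *vm_bo,
                             struct drm_gem_object *obj)
{
        struct drm_exec exec;
        int ret;

        drm_exec_init(&exec, 0);
        drm_exec_until_all_locked(&exec) {
                ret = drm_exec_lock_obj(&exec, obj);
                drm_exec_retry_on_contention(&exec);
                if (ret)
                        break;
                /* The VM's resv object would need the same treatment when it
                 * differs from obj->resv.
                 */
        }

        drm_gpuvm_bo_put(vm_bo);
        drm_exec_fini(&exec);
}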

Can we instead just have an internal mutex for locking the lists such that we
avoid taking and dropping the spinlocks, which we use currently, in a loop?

- Danilo









For extobjs an outer lock would be enough in case of Xe, but I
really would not
like to add even more complexity just to get the spinlock out of
the way in case
the driver already has an outer lock protecting this path.


I must disagree here. These spinlocks and atomic operations are
pretty
costly and as discussed earlier this type of locking was the reason
(at
least according to the commit message) that made Christian drop the
XArray
use in drm_exec for the same set of objects: "The locking overhead
is
unecessary and measurable". IMHO the spinlock is the added
complexity and a
single wide lock following the drm locking guidelines set out by
Daniel and
David should really be the default choice with an opt-in for a
spinlock if
needed for async and pushing out to a wq is not an option.


For the external object list an outer lock would work as long as it's
not the
dma-resv lock of the corresponding GEM object, since here we actually
need to
remove the list entry from the external object list on
drm_gpuvm_bo_destroy().
It's just a bit weird design wise that drivers would need to take
this outer
lock on:

- drm_gpuvm_bo_extobj_add()
- drm_gpuvm_bo_destroy()(and hence also drm_gpuvm_bo_put())
- drm_gpuva_unlink()(because it needs to call
drm_gpuvm_bo_put())
- drm_gpuvm_exec_lock()
- drm_gpuvm_exec_lock_array()
- drm_gpuvm_prepare_range()

Given that it seems reasonable to do all the required locking
internally.


 From a design POV, there has been a clear direction in XE to make
things similar to mmap() / munmap(), so this outer lock, which in Xe is
an rwsem, is used in a similar way as the mmap_lock. It's protecting
the page-table structures and vma rb tree, the userptr structures and
the extobj list. Basically it's taken early in the exec IOCTL, the
VM_BIND ioctl, the compute rebind worker and the pagefault handler, so
all of the above are just asserting that it is taken in the correct
mode.

But strictly with this scheme one could also use the vm's dma_resv for
the extobj list since with drm_exec, it's locked before traversing the
list.

The whole point of this scheme is to rely on locks that you already are
supposed to be holding for various reasons and is simple to comprehend.


I don't agree that we're supposed to hold the VM's resv lock anyways for
functions like drm_gpuvm_bo_put() or drm_gpuva_unlink(), but I'm fine using it
for that purpose nevertheless.





In order to at least place lockdep checks, the driver would need to
supply the
corresponding lock's lockdep_map, because the GPUVM otherwise doesn't
know about
the lock.


Yes, that sounds reasonable. One lockdep map per list.


I'd really like to avoid that, especially now that everything got simpler. We
should define the actual locks to take instead.





Out of curiosity, what is the overhead of a spin_lock() that doesn't
need to
spin?


I guess it's hard to tell exactly, but it is much lower on modern x86
than what it used to be. Not sure about ARM, which is the other
architecture important to us. I figure 

Re: [Nouveau] [PATCH drm-misc-next v3 6/7] drm/gpuvm: generalize dma_resv/extobj handling and GEM validation

2023-09-14 Thread Thomas Hellström



On 9/14/23 10:20, Boris Brezillon wrote:

On Wed, 13 Sep 2023 15:22:56 +0200
Thomas Hellström  wrote:


On 9/13/23 13:33, Boris Brezillon wrote:

On Wed, 13 Sep 2023 12:39:01 +0200
Thomas Hellström  wrote:
  

Hi,

On 9/13/23 09:19, Boris Brezillon wrote:

On Wed, 13 Sep 2023 17:05:42 +1000
Dave Airlie  wrote:
 

On Wed, 13 Sept 2023 at 17:03, Boris Brezillon
 wrote:

On Tue, 12 Sep 2023 18:20:32 +0200
Thomas Hellström  wrote:


+/**
+ * get_next_vm_bo_from_list() - get the next vm_bo element
+ * @__gpuvm: The GPU VM
+ * @__list_name: The name of the list we're iterating on
+ * @__local_list: A pointer to the local list used to store already iterated 
items
+ * @__prev_vm_bo: The previous element we got from 
drm_gpuvm_get_next_cached_vm_bo()
+ *
+ * This helper is here to provide lockless list iteration. Lockless as in, the
+ * iterator releases the lock immediately after picking the first element from
+ * the list, so list insertion / deletion can happen concurrently.

Are the list spinlocks needed for that async state update from within
the dma-fence critical section we've discussed previously?

Any driver calling _[un]link() from its drm_gpu_scheduler::run_job()
hook will be in this situation (Panthor at the moment, PowerVR soon). I
get that Xe and Nouveau don't need that because they update the VM
state early (in the ioctl path), but I keep thinking this will hurt us
if we don't think it through from the beginning, because once you've
set this logic to depend only on resv locks, it will be pretty hard to
get back to a solution which lets synchronous VM_BINDs take precedence
on asynchronous request, and, with vkQueueBindSparse() passing external
deps (plus the fact the VM_BIND queue might be pretty deep), it can
take a long time to get your synchronous VM_BIND executed...

So this would boil down to either (possibly opt-in) keeping the spinlock
approach or pushing the unlink out to a wq then?

Deferred _unlink() would not be an issue, since I already defer the
drm_gpuva destruction to a wq, it would just be a matter of moving the
_unlink() call there as well. But _link() also takes the GEM gpuva list
lock, and that one is bit tricky, in that sm_map() can trigger 2 more
_link() calls for the prev/next mappings, which we can't guess until we
get to execute the VM update. If we mandate the use of the GEM resv
lock, that simply means async VM updates (AKA calling
drm_gpuvm_sm_[un]map()) are not an option. And if this is what everyone
agrees on, then I'd like the APIs that make this sort of async VM
update possible (drm_gpuvm_sm_[un]map(), the drm_gpuvm_ops::sm_step*
methods, and probably other things) to be dropped, so we don't make it
look like it's something we support.
  

BTW, as also asked in a reply to Danilo, how do you call unlink from
run_job() when it was requiring the obj->dma_resv lock, or was that a WIP?

_unlink() makes sure the GEM gpuva list lock is taken, but this can be
a custom lock (see drm_gem_gpuva_set_lock()). In panthor we have
panthor_gem_object::gpuva_list_lock that's dedicated to the gpuva list
protection. We make sure we never take this lock while allocating
memory to guarantee the dma-signalling path can't deadlock.
  


btw what is the use case for this? do we have actual vulkan
applications we know will have problems here?

I don't, but I think that's a concern Faith raised at some point (dates
back from when I was reading threads describing how VM_BIND on i915
should work, and I was clearly discovering this whole VM_BIND thing at
that time, so maybe I misunderstood).
 

it feels like a bit of premature optimisation, but maybe we have use cases.

Might be, but that's the sort of thing that would put us in a corner if
we don't have a plan for when the needs arise. Besides, if we don't
want to support that case because it's too complicated, I'd recommend
dropping all the drm_gpuvm APIs that let people think this mode is
valid/supported (map/remap/unmap hooks in drm_gpuvm_ops,
drm_gpuvm_sm_[un]map helpers, etc). Keeping them around just adds to the
confusion.

Xe allows bypassing the bind-queue with another bind-queue, but to
completely avoid dependencies between queues the operations may not
overlap.

So, you check the VM state with some VM lock held (would be the VM resv
in my case), and if the mapping is new (no overlaps with pre-existing
mappings), you queue it to the fast-track/sync-VM_BIND queue. What would
be missing I guess is a way to know if the mapping is active (MMU has
been updated) or pending (MMU update queued to the bind-queue), so I can
fast-track mapping/unmapping of active mappings.

Ok, so I started modifying the implementation, and quickly realized the
overlap test can't be done without your xe_range_fence tree because of
unmaps. Since we call drm_gpuva_unmap() early/in the IOCTL path (IOW,
before the mapping teardown is effective), we lose track of this
yet-to-be-executed-unmap operation, and if we do our

Re: [Nouveau] [PATCH drm-misc-next v3 6/7] drm/gpuvm: generalize dma_resv/extobj handling and GEM validation

2023-09-13 Thread Danilo Krummrich

On 9/13/23 17:33, Christian König wrote:

Am 13.09.23 um 17:15 schrieb Danilo Krummrich:

On 9/13/23 16:26, Christian König wrote:

Am 13.09.23 um 14:16 schrieb Danilo Krummrich:

As mentioned in a different mail thread, the reply is based on the assumption
that we don't support anything else than GPUVM updates from the IOCTL.


I think that this assumption is incorrect.


Well, more precisely I should have said "don't support GPUVM updates from within
fence signaling critical sections". And looking at the code, that doesn't seem to be
what you're doing there.



Vulkan is just one specific use case, but this here should probably be able to 
handle other use cases as well.

Especially with HMM you get the requirement that you need to be able to 
invalidate GPUVM mappings without grabbing a reservation lock.


What do you mean with "invalidate GPUVM mappings" in this context? 
drm_gpuvm_bo_evict()
should only be called from a ttm_device_funcs::move callback, we should hold 
the dma-resv
lock there.


Well the question is which dma-resv lock do we hold?

In the move callback we only hold the dma-resv lock of the BO which is moved, 
but when that is a shared BO then that's not the same as the one for the VM.


Correct, Thomas' idea was to use the GEM's dma_resv lock to protect 
drm_gpuvm_bo::evicted
and then actually move the drm_gpuvm_bo to the VM's evicted list once we 
grabbed all
dma-resv locks when locking the VM's BOs using drm_exec. We can remove them 
from the evicted
list on validate(). This way we never touch the evicted list without holding at 
least the VM's
dma-resv lock.

Do you have any concerns about that?
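Roughly, that scheme would look like the sketch below (the evicted flag, the
list entry name and the helpers are made up; the evict list itself is the one
from the proposed struct drm_gpuvm):

/* Sketch: the GEM resv protects vm_bo->evicted; the VM's evict list is only
 * touched with all resvs held via drm_exec.
 */
static void sketch_bo_evict(struct drm_gpuvm_bo *vm_bo, bool evicted)
{
        dma_resv_assert_held(vm_bo->obj->resv);
        vm_bo->evicted = evicted;       /* hypothetical flag */
}

/* Called with all resvs locked, e.g. after drm_gpuvm_exec_lock(). */
static void sketch_collect_evicted(struct drm_gpuvm *gpuvm,
                                   struct drm_gpuvm_bo *vm_bo)
{
        dma_resv_assert_held(gpuvm->resv);

        if (vm_bo->evicted)
                list_move_tail(&vm_bo->evict_entry, &gpuvm->evict.list);
}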







See what the eviction lock in amdgpu is doing for example.


The eviction_lock seems to protect a VM state "evicting" of whether any BO that
is associated with the VM is currently evicting. At the same time amdgpu 
protects
the evicted list of the VM with a different lock. So this seems to be entirely
unrelated. Tracking a "currently evicting" state is not part of the GPUVM
implementation currently and hence nothing would change for amdgpu there.


Sorry for the confusion we use different terminology in amdgpu.

The eviction lock and evicted state is for the VM page tables, e.g. if the 
whole VM is currently not used and swapped out or even de-allocated.

This is necessary because we have cases where we need to access the VM data 
without holding the dma-resv lock of this VM. Especially figuring out which 
parts of an address space contain mappings and which don't.


I think this is fine, this has nothing to do with lists of evicted GEM objects 
or external GEM
objects, right? Marking mappings (drm_gpuva) as invalidated 
(DRM_GPUVA_INVALIDATED) or accessing
the VA space does not require any dma-resv locks.
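For example, an HMM-style invalidation can mark the affected range without any
dma-resv lock, along these lines (sketch; the notifier plumbing and the driver
lock serializing the walk are driver-specific, and the iterator / helper names
are the ones from this series):

static void sketch_invalidate_range(struct drm_gpuvm *gpuvm, u64 addr, u64 range)
{
        struct drm_gpuva *va;

        /* Walk the affected range and mark each mapping DRM_GPUVA_INVALIDATED. */
        drm_gpuvm_for_each_va_range(va, gpuvm, addr, addr + range)
                drm_gpuva_invalidate(va, true);
}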



This is a requirement which comes with HMM handling, you won't see this with 
Vulkan (or OpenGL, VAAPI etc..).


The invalidation lock on the other hand is what in this discussion is called 
eviction lock. This one is needed because what I wrote above, during the move 
callback only the dma-resv of the BO which is moved is locked, but not 
necessarily the dma-resv of the VM.


That's yet another thing, right? This is used to track whether *any* BO that 
belongs to the VM is
currently being evicted, correct? As mentioned, as of now this is not supported 
in GPUVM and hence
would be the same driver specific code with the same driver specific lock.



Regards,
Christian.





Regards,
Christian.



On Wed, Sep 13, 2023 at 11:14:46AM +0200, Thomas Hellström wrote:

Hi!

On Wed, 2023-09-13 at 01:36 +0200, Danilo Krummrich wrote:

On Tue, Sep 12, 2023 at 09:23:08PM +0200, Thomas Hellström wrote:

On 9/12/23 18:50, Danilo Krummrich wrote:

On Tue, Sep 12, 2023 at 06:20:32PM +0200, Thomas Hellström wrote:

Hi, Danilo,

On 9/9/23 17:31, Danilo Krummrich wrote:

So far the DRM GPUVA manager offers common infrastructure to
track GPU VA
allocations and mappings, generically connect GPU VA mappings
to their
backing buffers and perform more complex mapping operations
on the GPU VA
space.

However, there are more design patterns commonly used by
drivers, which
can potentially be generalized in order to make the DRM GPUVA
manager
represent a basic GPU-VM implementation. In this context,
this patch aims
at generalizing the following elements.

1) Provide a common dma-resv for GEM objects not being used
outside of
  this GPU-VM.

2) Provide tracking of external GEM objects (GEM objects
which are
  shared with other GPU-VMs).

3) Provide functions to efficiently lock all GEM objects dma-
resv the
  GPU-VM contains mappings of.

4) Provide tracking of evicted GEM objects the GPU-VM
contains mappings
  of, such that validation of evicted GEM objects is
accelerated.

5) Provide some convenience functions for common patterns.

Rather than being designed as a "framework", the target is to
make all
features appear as a collection of optional helper functions,
such that
drivers are free 

Re: [Nouveau] [PATCH drm-misc-next v3 6/7] drm/gpuvm: generalize dma_resv/extobj handling and GEM validation

2023-09-13 Thread Christian König

Am 13.09.23 um 17:15 schrieb Danilo Krummrich:

On 9/13/23 16:26, Christian König wrote:

Am 13.09.23 um 14:16 schrieb Danilo Krummrich:
As mentioned in a different mail thread, the reply is based on the 
assumption

that we don't support anything else than GPUVM updates from the IOCTL.


I think that this assumption is incorrect.


Well, more precisely I should have said "don't support GPUVM updates from within
fence signaling critical sections". And looking at the code, that doesn't seem to be
what you're doing there.



Vulkan is just one specific use case, but this here should probably 
be able to handle other use cases as well.


Especially with HMM you get the requirement that you need to be able 
to invalidate GPUVM mappings without grabbing a reservation lock.


What do you mean with "invalidate GPUVM mappings" in this context? 
drm_gpuvm_bo_evict()
should only be called from a ttm_device_funcs::move callback, we 
should hold the dma-resv

lock there.


Well the question is which dma-resv lock do we hold?

In the move callback we only hold the dma-resv lock of the BO which is 
moved, but when that is a shared BO then that's not the same as the one 
for the VM.






See what the eviction lock in amdgpu is doing for example.


The eviction_lock seems to protect a VM state "evicting" of whether 
any BO that
is associated with the VM is currently evicting. At the same time 
amdgpu protects
the evicted list of the VM with a different lock. So this seems to be 
entirely

unrelated. Tracking a "currently evicting" state is not part of the GPUVM
implementation currently and hence nothing would change for amdgpu there.


Sorry for the confusion we use different terminology in amdgpu.

The eviction lock and evicted state is for the VM page tables, e.g. if 
the whole VM is currently not used and swapped out or even de-allocated.


This is necessary because we have cases where we need to access the VM 
data without holding the dma-resv lock of this VM. Especially figuring 
out which parts of an address space contain mappings and which don't.


This is a requirement which comes with HMM handling, you won't see this 
with Vulkan (or OpenGL, VAAPI etc..).



The invalidation lock on the other hand is what in this discussion is 
called eviction lock. This one is needed because what I wrote above, 
during the move callback only the dma-resv of the BO which is moved is 
locked, but not necessarily the dma-resv of the VM.


Regards,
Christian.





Regards,
Christian.



On Wed, Sep 13, 2023 at 11:14:46AM +0200, Thomas Hellström wrote:

Hi!

On Wed, 2023-09-13 at 01:36 +0200, Danilo Krummrich wrote:

On Tue, Sep 12, 2023 at 09:23:08PM +0200, Thomas Hellström wrote:

On 9/12/23 18:50, Danilo Krummrich wrote:

On Tue, Sep 12, 2023 at 06:20:32PM +0200, Thomas Hellström wrote:

Hi, Danilo,

On 9/9/23 17:31, Danilo Krummrich wrote:

So far the DRM GPUVA manager offers common infrastructure to
track GPU VA
allocations and mappings, generically connect GPU VA mappings
to their
backing buffers and perform more complex mapping operations
on the GPU VA
space.

However, there are more design patterns commonly used by
drivers, which
can potentially be generalized in order to make the DRM GPUVA
manager
represent a basic GPU-VM implementation. In this context,
this patch aims
at generalizing the following elements.

1) Provide a common dma-resv for GEM objects not being used
outside of
  this GPU-VM.

2) Provide tracking of external GEM objects (GEM objects
which are
  shared with other GPU-VMs).

3) Provide functions to efficiently lock all GEM objects dma-
resv the
  GPU-VM contains mappings of.

4) Provide tracking of evicted GEM objects the GPU-VM
contains mappings
  of, such that validation of evicted GEM objects is
accelerated.

5) Provide some convenience functions for common patterns.

Rather than being designed as a "framework", the target is to
make all
features appear as a collection of optional helper functions,
such that
drivers are free to make use of the DRM GPUVA managers basic
functionality and opt-in for other features without setting
any feature
flags, just by making use of the corresponding functions.

Big kudos to Boris Brezillon for his help to figure out
locking for drivers
updating the GPU VA space within the fence signalling path.

Suggested-by: Matthew Brost 
Signed-off-by: Danilo Krummrich 
---
    drivers/gpu/drm/drm_gpuvm.c | 516

    include/drm/drm_gpuvm.h | 197 ++
    2 files changed, 713 insertions(+)

diff --git a/drivers/gpu/drm/drm_gpuvm.c
b/drivers/gpu/drm/drm_gpuvm.c
index f4411047dbb3..8e62a043f719 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -73,6 +73,21 @@
 * _gem_object list of _gpuvm_bos for an existing
instance of this
 * particular combination. If not existent a new instance
is created and linked
 * to the _gem_object.
+ *
+ * _gpuvm_bo structures, since unique for a 

Re: [Nouveau] [PATCH drm-misc-next v3 6/7] drm/gpuvm: generalize dma_resv/extobj handling and GEM validation

2023-09-13 Thread Christian König

Am 13.09.23 um 17:13 schrieb Thomas Hellström:

Hi Christian

On 9/13/23 16:26, Christian König wrote:

Am 13.09.23 um 14:16 schrieb Danilo Krummrich:
As mentioned in a different mail thread, the reply is based on the 
assumption

that we don't support anything else than GPUVM updates from the IOCTL.


I think that this assumption is incorrect.

Vulkan is just one specific use case, but this here should probably 
be able to handle other use cases as well.


Especially with HMM you get the requirement that you need to be able 
to invalidate GPUVM mappings without grabbing a reservation lock.


Are you referring to the MMU range invalidation notifiers here?


Yes, but you need to ping Felix and Philip for the details.





See what the eviction lock in amdgpu is doing for example.


IMO the statement regarding GPUVM updates from the IOCTL mostly refers 
to the need to protect the evicted- and extobj lists with additional 
spinlocks. Supporting userptr and faulting will ofc require additional 
locks / locking mechanisms. But this code doesn't do that yet. Is your 
concern that these particular spinlocks for these lists are indeed 
needed?


More or less yes. My main concern is that both Dave and Danilo mentioned 
that they work with the assumption that they only need to handle 
Vulkan/IOCTL based use cases.


Regards,
Christian.



/Thomas




Regards,
Christian.



On Wed, Sep 13, 2023 at 11:14:46AM +0200, Thomas Hellström wrote:

Hi!

On Wed, 2023-09-13 at 01:36 +0200, Danilo Krummrich wrote:

On Tue, Sep 12, 2023 at 09:23:08PM +0200, Thomas Hellström wrote:

On 9/12/23 18:50, Danilo Krummrich wrote:

On Tue, Sep 12, 2023 at 06:20:32PM +0200, Thomas Hellström wrote:

Hi, Danilo,

On 9/9/23 17:31, Danilo Krummrich wrote:

So far the DRM GPUVA manager offers common infrastructure to
track GPU VA
allocations and mappings, generically connect GPU VA mappings
to their
backing buffers and perform more complex mapping operations
on the GPU VA
space.

However, there are more design patterns commonly used by
drivers, which
can potentially be generalized in order to make the DRM GPUVA
manager
represent a basic GPU-VM implementation. In this context,
this patch aims
at generalizing the following elements.

1) Provide a common dma-resv for GEM objects not being used
outside of
  this GPU-VM.

2) Provide tracking of external GEM objects (GEM objects
which are
  shared with other GPU-VMs).

3) Provide functions to efficiently lock all GEM objects dma-
resv the
  GPU-VM contains mappings of.

4) Provide tracking of evicted GEM objects the GPU-VM
contains mappings
  of, such that validation of evicted GEM objects is
accelerated.

5) Provide some convenience functions for common patterns.

Rather than being designed as a "framework", the target is to
make all
features appear as a collection of optional helper functions,
such that
drivers are free to make use of the DRM GPUVA managers basic
functionality and opt-in for other features without setting
any feature
flags, just by making use of the corresponding functions.

Big kudos to Boris Brezillon for his help to figure out
locking for drivers
updating the GPU VA space within the fence signalling path.

Suggested-by: Matthew Brost 
Signed-off-by: Danilo Krummrich 
---
    drivers/gpu/drm/drm_gpuvm.c | 516

    include/drm/drm_gpuvm.h | 197 ++
    2 files changed, 713 insertions(+)

diff --git a/drivers/gpu/drm/drm_gpuvm.c
b/drivers/gpu/drm/drm_gpuvm.c
index f4411047dbb3..8e62a043f719 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -73,6 +73,21 @@
 * _gem_object list of _gpuvm_bos for an existing
instance of this
 * particular combination. If not existent a new instance
is created and linked
 * to the _gem_object.
+ *
+ * _gpuvm_bo structures, since unique for a given
_gpuvm, are also used
+ * as entry for the _gpuvm's lists of external and
evicted objects. Those
+ * list are maintained in order to accelerate locking of
dma-resv locks and
+ * validation of evicted objects bound in a _gpuvm. For
instance the all
+ * _gem_object's _resv of a given _gpuvm can be
locked by calling
+ * drm_gpuvm_exec_lock(). Once locked drivers can call
drm_gpuvm_validate() in
+ * order to validate all evicted _gem_objects. It is
also possible to lock
+ * additional _gem_objects by providing the
corresponding parameters to
+ * drm_gpuvm_exec_lock() as well as open code the _exec
loop while making
+ * use of helper functions such as drm_gpuvm_prepare_range()
or
+ * drm_gpuvm_prepare_objects().
+ *
+ * Every bound _gem_object is treated as external object
when its _resv
+ * structure is different than the _gpuvm's common
_resv structure.
 */
    /**
@@ -420,6 +435,20 @@
 * Subsequent calls to drm_gpuvm_bo_obtain() for the same
_gpuvm and
 * _gem_object must be able to observe previous
creations and destructions
 * of _gpuvm_bos in order to keep instances unique.
+ *
+ * 

Re: [Nouveau] [PATCH drm-misc-next v3 6/7] drm/gpuvm: generalize dma_resv/extobj handling and GEM validation

2023-09-13 Thread Thomas Hellström

Hi Christian

On 9/13/23 16:26, Christian König wrote:

Am 13.09.23 um 14:16 schrieb Danilo Krummrich:
As mentioned in a different mail thread, the reply is based on the 
assumption

that we don't support anything else than GPUVM updates from the IOCTL.


I think that this assumption is incorrect.

Vulkan is just one specific use case, but this here should probably 
be able to handle other use cases as well.


Especially with HMM you get the requirement that you need to be able 
to invalidate GPUVM mappings without grabbing a reservation lock.


Are you referring to the MMU range invalidation notifiers here?



See what the eviction lock in amdgpu is doing for example.


IMO the statement regarding GPUVM updates from the IOCTL mostly refers 
to the need to protect the evicted- and extobj lists with additional 
spinlocks. Supporting userptr and faulting will ofc require additional 
locks / locking mechanisms. But this code doesn't do that yet. Is your 
concern that these particular spinlocks for these lists are indeed needed?


/Thomas




Regards,
Christian.



On Wed, Sep 13, 2023 at 11:14:46AM +0200, Thomas Hellström wrote:

Hi!

On Wed, 2023-09-13 at 01:36 +0200, Danilo Krummrich wrote:

On Tue, Sep 12, 2023 at 09:23:08PM +0200, Thomas Hellström wrote:

On 9/12/23 18:50, Danilo Krummrich wrote:

On Tue, Sep 12, 2023 at 06:20:32PM +0200, Thomas Hellström wrote:

Hi, Danilo,

On 9/9/23 17:31, Danilo Krummrich wrote:

So far the DRM GPUVA manager offers common infrastructure to
track GPU VA
allocations and mappings, generically connect GPU VA mappings
to their
backing buffers and perform more complex mapping operations
on the GPU VA
space.

However, there are more design patterns commonly used by
drivers, which
can potentially be generalized in order to make the DRM GPUVA
manager
represent a basic GPU-VM implementation. In this context,
this patch aims
at generalizing the following elements.

1) Provide a common dma-resv for GEM objects not being used
outside of
  this GPU-VM.

2) Provide tracking of external GEM objects (GEM objects
which are
  shared with other GPU-VMs).

3) Provide functions to efficiently lock all GEM objects dma-
resv the
  GPU-VM contains mappings of.

4) Provide tracking of evicted GEM objects the GPU-VM
contains mappings
  of, such that validation of evicted GEM objects is
accelerated.

5) Provide some convenience functions for common patterns.

Rather than being designed as a "framework", the target is to
make all
features appear as a collection of optional helper functions,
such that
drivers are free to make use of the DRM GPUVA managers basic
functionality and opt-in for other features without setting
any feature
flags, just by making use of the corresponding functions.

Big kudos to Boris Brezillon for his help to figure out
locking for drivers
updating the GPU VA space within the fence signalling path.

Suggested-by: Matthew Brost 
Signed-off-by: Danilo Krummrich 
---
    drivers/gpu/drm/drm_gpuvm.c | 516

    include/drm/drm_gpuvm.h | 197 ++
    2 files changed, 713 insertions(+)

diff --git a/drivers/gpu/drm/drm_gpuvm.c
b/drivers/gpu/drm/drm_gpuvm.c
index f4411047dbb3..8e62a043f719 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -73,6 +73,21 @@
 * _gem_object list of _gpuvm_bos for an existing
instance of this
 * particular combination. If not existent a new instance
is created and linked
 * to the _gem_object.
+ *
+ * _gpuvm_bo structures, since unique for a given
_gpuvm, are also used
+ * as entry for the _gpuvm's lists of external and
evicted objects. Those
+ * list are maintained in order to accelerate locking of
dma-resv locks and
+ * validation of evicted objects bound in a _gpuvm. For
instance the all
+ * _gem_object's _resv of a given _gpuvm can be
locked by calling
+ * drm_gpuvm_exec_lock(). Once locked drivers can call
drm_gpuvm_validate() in
+ * order to validate all evicted _gem_objects. It is
also possible to lock
+ * additional _gem_objects by providing the
corresponding parameters to
+ * drm_gpuvm_exec_lock() as well as open code the _exec
loop while making
+ * use of helper functions such as drm_gpuvm_prepare_range()
or
+ * drm_gpuvm_prepare_objects().
+ *
+ * Every bound _gem_object is treated as external object
when its _resv
+ * structure is different than the _gpuvm's common
_resv structure.
 */
    /**
@@ -420,6 +435,20 @@
 * Subsequent calls to drm_gpuvm_bo_obtain() for the same
_gpuvm and
 * _gem_object must be able to observe previous
creations and destructions
 * of _gpuvm_bos in order to keep instances unique.
+ *
+ * The _gpuvm's lists for keeping track of external and
evicted objects are
+ * protected against concurrent insertion / removal and
iteration internally.
+ *
+ * However, drivers still need to protect concurrent
calls to functions
+ * iterating those lists, such as drm_gpuvm_validate() and
+ * 

Re: [Nouveau] [PATCH drm-misc-next v3 6/7] drm/gpuvm: generalize dma_resv/extobj handling and GEM validation

2023-09-13 Thread Danilo Krummrich

On 9/13/23 16:26, Christian König wrote:

Am 13.09.23 um 14:16 schrieb Danilo Krummrich:

As mentioned in a different mail thread, the reply is based on the assumption
that we don't support anything else than GPUVM updates from the IOCTL.


I think that this assumption is incorrect.


Well, more precisely I should have said "don't support GPUVM updates from within
fence signaling critical sections". And looking at the code, that doesn't seem to be
what you're doing there.



Vulkan is just one specific use case, but this here should probably be able to 
handle other use cases as well.

Especially with HMM you get the requirement that you need to be able to 
invalidate GPUVM mappings without grabbing a reservation lock.


What do you mean with "invalidate GPUVM mappings" in this context? 
drm_gpuvm_bo_evict()
should only be called from a ttm_device_funcs::move callback, we should hold 
the dma-resv
lock there.



See what the eviction lock in amdgpu is doing for example.


The eviction_lock seems to protect a VM state "evicting" of whether any BO that
is associated with the VM is currently evicting. At the same time amdgpu 
protects
the evicted list of the VM with a different lock. So this seems to be entirely
unrelated. Tracking a "currently evicting" state is not part of the GPUVM
implementation currently and hence nothing would change for amdgpu there.



Regards,
Christian.



On Wed, Sep 13, 2023 at 11:14:46AM +0200, Thomas Hellström wrote:

Hi!

On Wed, 2023-09-13 at 01:36 +0200, Danilo Krummrich wrote:

On Tue, Sep 12, 2023 at 09:23:08PM +0200, Thomas Hellström wrote:

On 9/12/23 18:50, Danilo Krummrich wrote:

On Tue, Sep 12, 2023 at 06:20:32PM +0200, Thomas Hellström wrote:

Hi, Danilo,

On 9/9/23 17:31, Danilo Krummrich wrote:

So far the DRM GPUVA manager offers common infrastructure to
track GPU VA
allocations and mappings, generically connect GPU VA mappings
to their
backing buffers and perform more complex mapping operations
on the GPU VA
space.

However, there are more design patterns commonly used by
drivers, which
can potentially be generalized in order to make the DRM GPUVA
manager
represent a basic GPU-VM implementation. In this context,
this patch aims
at generalizing the following elements.

1) Provide a common dma-resv for GEM objects not being used
outside of
  this GPU-VM.

2) Provide tracking of external GEM objects (GEM objects
which are
  shared with other GPU-VMs).

3) Provide functions to efficiently lock all GEM objects dma-
resv the
  GPU-VM contains mappings of.

4) Provide tracking of evicted GEM objects the GPU-VM
contains mappings
  of, such that validation of evicted GEM objects is
accelerated.

5) Provide some convenience functions for common patterns.

Rather than being designed as a "framework", the target is to
make all
features appear as a collection of optional helper functions,
such that
drivers are free to make use of the DRM GPUVA managers basic
functionality and opt-in for other features without setting
any feature
flags, just by making use of the corresponding functions.

Big kudos to Boris Brezillon for his help to figure out
locking for drivers
updating the GPU VA space within the fence signalling path.

Suggested-by: Matthew Brost 
Signed-off-by: Danilo Krummrich 
---
    drivers/gpu/drm/drm_gpuvm.c | 516

    include/drm/drm_gpuvm.h | 197 ++
    2 files changed, 713 insertions(+)

diff --git a/drivers/gpu/drm/drm_gpuvm.c
b/drivers/gpu/drm/drm_gpuvm.c
index f4411047dbb3..8e62a043f719 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -73,6 +73,21 @@
 * _gem_object list of _gpuvm_bos for an existing
instance of this
 * particular combination. If not existent a new instance
is created and linked
 * to the _gem_object.
+ *
+ * _gpuvm_bo structures, since unique for a given
_gpuvm, are also used
+ * as entry for the _gpuvm's lists of external and
evicted objects. Those
+ * list are maintained in order to accelerate locking of
dma-resv locks and
+ * validation of evicted objects bound in a _gpuvm. For
instance the all
+ * _gem_object's _resv of a given _gpuvm can be
locked by calling
+ * drm_gpuvm_exec_lock(). Once locked drivers can call
drm_gpuvm_validate() in
+ * order to validate all evicted _gem_objects. It is
also possible to lock
+ * additional _gem_objects by providing the
corresponding parameters to
+ * drm_gpuvm_exec_lock() as well as open code the _exec
loop while making
+ * use of helper functions such as drm_gpuvm_prepare_range()
or
+ * drm_gpuvm_prepare_objects().
+ *
+ * Every bound _gem_object is treated as external object
when its _resv
+ * structure is different than the _gpuvm's common
_resv structure.
 */
    /**
@@ -420,6 +435,20 @@
 * Subsequent calls to drm_gpuvm_bo_obtain() for the same
_gpuvm and
 * _gem_object must be able to observe previous
creations and destructions
 * of _gpuvm_bos in order to keep instances 

Re: [Nouveau] [PATCH drm-misc-next v3 6/7] drm/gpuvm: generalize dma_resv/extobj handling and GEM validation

2023-09-13 Thread Thomas Hellström



On 9/13/23 16:01, Boris Brezillon wrote:

On Wed, 13 Sep 2023 15:22:56 +0200
Thomas Hellström  wrote:


On 9/13/23 13:33, Boris Brezillon wrote:

On Wed, 13 Sep 2023 12:39:01 +0200
Thomas Hellström  wrote:
  

Hi,

On 9/13/23 09:19, Boris Brezillon wrote:

On Wed, 13 Sep 2023 17:05:42 +1000
Dave Airlie  wrote:
 

On Wed, 13 Sept 2023 at 17:03, Boris Brezillon
 wrote:

On Tue, 12 Sep 2023 18:20:32 +0200
Thomas Hellström  wrote:


+/**
+ * get_next_vm_bo_from_list() - get the next vm_bo element
+ * @__gpuvm: The GPU VM
+ * @__list_name: The name of the list we're iterating on
+ * @__local_list: A pointer to the local list used to store already iterated 
items
+ * @__prev_vm_bo: The previous element we got from 
drm_gpuvm_get_next_cached_vm_bo()
+ *
+ * This helper is here to provide lockless list iteration. Lockless as in, the
+ * iterator releases the lock immediately after picking the first element from
+ * the list, so list insertion / deletion can happen concurrently.

Are the list spinlocks needed for that async state update from within
the dma-fence critical section we've discussed previously?

Any driver calling _[un]link() from its drm_gpu_scheduler::run_job()
hook will be in this situation (Panthor at the moment, PowerVR soon). I
get that Xe and Nouveau don't need that because they update the VM
state early (in the ioctl path), but I keep thinking this will hurt us
if we don't think it through from the beginning, because once you've
set this logic to depend only on resv locks, it will be pretty hard to
get back to a solution which lets synchronous VM_BINDs take precedence
on asynchronous request, and, with vkQueueBindSparse() passing external
deps (plus the fact the VM_BIND queue might be pretty deep), it can
take a long time to get your synchronous VM_BIND executed...

So this would boil down to either (possibly opt-in) keeping the spinlock
approach or pushing the unlink out to a wq then?

Deferred _unlink() would not be an issue, since I already defer the
drm_gpuva destruction to a wq, it would just be a matter of moving the
_unlink() call there as well. But _link() also takes the GEM gpuva list
lock, and that one is bit tricky, in that sm_map() can trigger 2 more
_link() calls for the prev/next mappings, which we can't guess until we
get to execute the VM update. If we mandate the use of the GEM resv
lock, that simply means async VM updates (AKA calling
drm_gpuvm_sm_[un]map()) are not an option. And if this is what everyone
agrees on, then I'd like the APIs that make this sort of async VM
update possible (drm_gpuvm_sm_[un]map(), the drm_gpuvm_ops::sm_step*
methods, and probably other things) to be dropped, so we don't make it
look like it's something we support.
  

BTW, as also asked in a reply to Danilo, how do you call unlink from
run_job() when it was requiring the obj->dma_resv lock, or was that a WIP?

_unlink() makes sure the GEM gpuva list lock is taken, but this can be
a custom lock (see drm_gem_gpuva_set_lock()). In panthor we have
panthor_gem_object::gpuva_list_lock that's dedicated to the gpuva list
protection. We make sure we never take this lock while allocating
memory to guarantee the dma-signalling path can't deadlock.
  


btw what is the use case for this? do we have actual vulkan
applications we know will have problems here?

I don't, but I think that's a concern Faith raised at some point (dates
back from when I was reading threads describing how VM_BIND on i915
should work, and I was clearly discovering this whole VM_BIND thing at
that time, so maybe I misunderstood).
 

it feels like a bit of premature optimisation, but maybe we have use cases.

Might be, but that's the sort of thing that would put us in a corner if
we don't have a plan for when the needs arise. Besides, if we don't
want to support that case because it's too complicated, I'd recommend
dropping all the drm_gpuvm APIs that let people think this mode is
valid/supported (map/remap/unmap hooks in drm_gpuvm_ops,
drm_gpuvm_sm_[un]map helpers, etc). Keeping them around just adds to the
confusion.

Xe allows bypassing the bind-queue with another bind-queue, but to
completely avoid dependencies between queues the operations may not
overlap.

So, you check the VM state with some VM lock held (would be the VM resv
in my case), and if the mapping is new (no overlaps with pre-existing
mappings), you queue it to the fast-track/sync-VM_BIND queue. What would
be missing I guess is a way to know if the mapping is active (MMU has
been updated) or pending (MMU update queued to the bind-queue), so I can
fast-track mapping/unmapping of active mappings. This would leave
overlapping sync/async VM updates, which can't happen in practice
unless userspace is doing something wrong (sparse bindings always go
through vkQueueBindSparse).

User-space is allowed to create new bind queues at will, and they
execute independently save for range overlaps.

I've limited panthor to just one bind-queue that's automatically

Re: [Nouveau] [PATCH drm-misc-next v3 6/7] drm/gpuvm: generalize dma_resv/extobj handling and GEM validation

2023-09-13 Thread Christian König

Am 13.09.23 um 14:16 schrieb Danilo Krummrich:

As mentioned in a different mail thread, the reply is based on the assumption
that we don't support anything else than GPUVM updates from the IOCTL.


I think that this assumption is incorrect.

Vulkan is just one specific use case, but this here should probably be 
able to handle other use cases as well.


Especially with HMM you get the requirement that you need to be able to 
invalidate GPUVM mappings without grabbing a reservation lock.


See what the eviction lock in amdgpu is doing for example.

Regards,
Christian.



On Wed, Sep 13, 2023 at 11:14:46AM +0200, Thomas Hellström wrote:

Hi!

On Wed, 2023-09-13 at 01:36 +0200, Danilo Krummrich wrote:

On Tue, Sep 12, 2023 at 09:23:08PM +0200, Thomas Hellström wrote:

On 9/12/23 18:50, Danilo Krummrich wrote:

On Tue, Sep 12, 2023 at 06:20:32PM +0200, Thomas Hellström wrote:

Hi, Danilo,

On 9/9/23 17:31, Danilo Krummrich wrote:

So far the DRM GPUVA manager offers common infrastructure to
track GPU VA
allocations and mappings, generically connect GPU VA mappings
to their
backing buffers and perform more complex mapping operations
on the GPU VA
space.

However, there are more design patterns commonly used by
drivers, which
can potentially be generalized in order to make the DRM GPUVA
manager
represent a basic GPU-VM implementation. In this context,
this patch aims
at generalizing the following elements.

1) Provide a common dma-resv for GEM objects not being used
outside of
  this GPU-VM.

2) Provide tracking of external GEM objects (GEM objects
which are
  shared with other GPU-VMs).

3) Provide functions to efficiently lock all GEM objects dma-
resv the
  GPU-VM contains mappings of.

4) Provide tracking of evicted GEM objects the GPU-VM
contains mappings
  of, such that validation of evicted GEM objects is
accelerated.

5) Provide some convenience functions for common patterns.

Rather than being designed as a "framework", the target is to
make all
features appear as a collection of optional helper functions,
such that
drivers are free to make use of the DRM GPUVA managers basic
functionality and opt-in for other features without setting
any feature
flags, just by making use of the corresponding functions.

Big kudos to Boris Brezillon for his help to figure out
locking for drivers
updating the GPU VA space within the fence signalling path.

Suggested-by: Matthew Brost 
Signed-off-by: Danilo Krummrich 
---
    drivers/gpu/drm/drm_gpuvm.c | 516

    include/drm/drm_gpuvm.h | 197 ++
    2 files changed, 713 insertions(+)

diff --git a/drivers/gpu/drm/drm_gpuvm.c
b/drivers/gpu/drm/drm_gpuvm.c
index f4411047dbb3..8e62a043f719 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -73,6 +73,21 @@
     * &drm_gem_object list of &drm_gpuvm_bos for an existing instance of this
     * particular combination. If not existent a new instance is created and linked
     * to the &drm_gem_object.
+ *
+ * &drm_gpuvm_bo structures, since unique for a given &drm_gpuvm, are also used
+ * as entry for the &drm_gpuvm's lists of external and evicted objects. Those
+ * lists are maintained in order to accelerate locking of dma-resv locks and
+ * validation of evicted objects bound in a &drm_gpuvm. For instance, all
+ * &drm_gem_object's &dma_resv of a given &drm_gpuvm can be locked by calling
+ * drm_gpuvm_exec_lock(). Once locked drivers can call drm_gpuvm_validate() in
+ * order to validate all evicted &drm_gem_objects. It is also possible to lock
+ * additional &drm_gem_objects by providing the corresponding parameters to
+ * drm_gpuvm_exec_lock() as well as open code the &drm_exec loop while making
+ * use of helper functions such as drm_gpuvm_prepare_range() or
+ * drm_gpuvm_prepare_objects().
+ *
+ * Every bound &drm_gem_object is treated as external object when its &dma_resv
+ * structure is different than the &drm_gpuvm's common &dma_resv structure.
     */
    /**
@@ -420,6 +435,20 @@
     * Subsequent calls to drm_gpuvm_bo_obtain() for the same
_gpuvm and
     * _gem_object must be able to observe previous
creations and destructions
     * of _gpuvm_bos in order to keep instances unique.
+ *
+ * The &drm_gpuvm's lists for keeping track of external and evicted objects are
+ * protected against concurrent insertion / removal and iteration internally.
+ *
+ * However, drivers still need to protect concurrent calls to functions
+ * iterating those lists, such as drm_gpuvm_validate() and
+ * drm_gpuvm_prepare_objects(). Every such function contains a particular
+ * comment and lockdep checks if possible.
+ *
+ * Functions adding or removing entries from those lists, such as
+ * drm_gpuvm_bo_evict() or drm_gpuvm_bo_extobj_add() may be called with external
+ * locks being held, e.g. in order to avoid the corresponding list to be
+ * (safely) modified while potentially being iterated by other API functions.
+ * However, this is entirely optional.
     */
    /**
@@ -632,6 +661,131 @@
     *   }

Re: [Nouveau] [PATCH drm-misc-next v3 6/7] drm/gpuvm: generalize dma_resv/extobj handling and GEM validation

2023-09-13 Thread Thomas Hellström



On 9/13/23 13:33, Boris Brezillon wrote:

On Wed, 13 Sep 2023 12:39:01 +0200
Thomas Hellström  wrote:


Hi,

On 9/13/23 09:19, Boris Brezillon wrote:

On Wed, 13 Sep 2023 17:05:42 +1000
Dave Airlie  wrote:
  

On Wed, 13 Sept 2023 at 17:03, Boris Brezillon
 wrote:

On Tue, 12 Sep 2023 18:20:32 +0200
Thomas Hellström  wrote:
 

+/**
+ * get_next_vm_bo_from_list() - get the next vm_bo element
+ * @__gpuvm: The GPU VM
+ * @__list_name: The name of the list we're iterating on
+ * @__local_list: A pointer to the local list used to store already iterated 
items
+ * @__prev_vm_bo: The previous element we got from 
drm_gpuvm_get_next_cached_vm_bo()
+ *
+ * This helper is here to provide lockless list iteration. Lockless as in, the
+ * iterator releases the lock immediately after picking the first element from
+ * the list, so list insertion/deletion can happen concurrently.

Are the list spinlocks needed for that async state update from within
the dma-fence critical section we've discussed previously?

Any driver calling _[un]link() from its drm_gpu_scheduler::run_job()
hook will be in this situation (Panthor at the moment, PowerVR soon). I
get that Xe and Nouveau don't need that because they update the VM
state early (in the ioctl path), but I keep thinking this will hurt us
if we don't think it through from the beginning, because once you've
set this logic to depend only on resv locks, it will be pretty hard to
get back to a solution which lets synchronous VM_BINDs take precedence
on asynchronous request, and, with vkQueueBindSparse() passing external
deps (plus the fact the VM_BIND queue might be pretty deep), it can
take a long time to get your synchronous VM_BIND executed...

So this would boil down to either (possibly opt-in) keeping the spinlock
approach or pushing the unlink out to a wq then?

Deferred _unlink() would not be an issue, since I already defer the
drm_gpuva destruction to a wq, it would just be a matter of moving the
_unlink() call there as well. But _link() also takes the GEM gpuva list
lock, and that one is a bit tricky, in that sm_map() can trigger 2 more
_link() calls for the prev/next mappings, which we can't guess until we
get to execute the VM update. If we mandate the use of the GEM resv
lock, that simply means async VM updates (AKA calling
drm_gpuvm_sm_[un]map()) are not an option. And if this is what everyone
agrees on, then I'd like the APIs that make this sort of async VM
update possible (drm_gpuvm_sm_[un]map(), the drm_gpuvm_ops::sm_step*
methods, and probably other things) to be dropped, so we don't make it
look like it's something we support.


BTW, as also asked in a reply to Danilo, how do you call unlink from
run_job() when it was requiring the obj->dma_resv lock, or was that a WIP?

_unlink() makes sure the GEM gpuva list lock is taken, but this can be
a custom lock (see drm_gem_gpuva_set_lock()). In panthor we have
panthor_gem_object::gpuva_list_lock that's dedicated to gpuva list
protection. We make sure we never take this lock while allocating
memory to guarantee the dma-signalling path can't deadlock.
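To make that concrete, a minimal sketch of the deferred-unlink idea; all
ex_* names and the mutex member are hypothetical driver-side constructs
(not panthor code, not part of this series), only drm_gpuva_unlink() and
drm_gem_gpuva_set_lock() are from the common code:

struct ex_va_cleanup {
	struct work_struct work;
	struct drm_gpuva *va;		/* pre-allocated before run_job() */
	struct mutex *gpuva_list_lock;	/* the driver's dedicated GEM gpuva-list lock */
};

static void ex_va_cleanup_work(struct work_struct *work)
{
	struct ex_va_cleanup *cleanup = container_of(work, struct ex_va_cleanup, work);

	/* Outside the dma-fence signalling path: taking the dedicated
	 * gpuva-list lock (the one registered via drm_gem_gpuva_set_lock())
	 * and freeing memory is fine here.
	 */
	mutex_lock(cleanup->gpuva_list_lock);
	drm_gpuva_unlink(cleanup->va);
	mutex_unlock(cleanup->gpuva_list_lock);

	kfree(cleanup->va);
	kfree(cleanup);
}

/* Called from run_job(): no allocation, no sleeping lock, just queue the work. */
static void ex_defer_unlink(struct ex_va_cleanup *cleanup)
{
	INIT_WORK(&cleanup->work, ex_va_cleanup_work);
	queue_work(system_wq, &cleanup->work);
}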

 

btw what is the use case for this? do we have actual vulkan
applications we know will have problems here?

I don't, but I think that's a concern Faith raised at some point (dates
back from when I was reading threads describing how VM_BIND on i915
should work, and I was clearly discovering this whole VM_BIND thing at
that time, so maybe I misunderstood).
  

it feels like a bit of premature optimisation, but maybe we have use cases.

Might be, but that's the sort of thing that would put us in a corner if
we don't have a plan for when the needs arise. Besides, if we don't
want to support that case because it's too complicated, I'd recommend
dropping all the drm_gpuvm APIs that let people think this mode is
valid/supported (map/remap/unmap hooks in drm_gpuvm_ops,
drm_gpuvm_sm_[un]map helpers, etc). Keeping them around just adds to the
confusion.

Xe allows bypassing the bind-queue with another bind-queue, but to
completely avoid dependencies between queues the operations may not
overlap.

So, you check the VM state with some VM lock held (would be the VM resv
in my case), and if the mapping is new (no overlaps with pre-existing
mappings), you queue it to the fast-track/sync-VM_BIND queue. What would
be missing I guess is a way to know if the mapping is active (MMU has
been updated) or pending (MMU update queued to the bind-queue), so I can
fast-track mapping/unmapping of active mappings. This would leave
overlapping sync/async VM updates, which can't happen in practice
unless userspace is doing something wrong (sparse bindings always go
through vkQueueBindSparse).
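Roughly, and with everything except the interval check being hypothetical
driver code (ex_vm, ex_bind_op, the two helpers and vm->lock), such a
dispatch could look like the sketch below; drm_gpuvm_interval_empty() is
assumed to be the post-rename spelling of drm_gpuva_interval_empty():

static int ex_vm_bind_dispatch(struct ex_vm *vm, struct ex_bind_op *op)
{
	int ret;

	/* Some VM-wide lock (the VM resv in the case described above)
	 * serializes the state check against other binds.
	 */
	mutex_lock(&vm->lock);

	if (drm_gpuvm_interval_empty(&vm->gpuvm, op->addr, op->range)) {
		/* Nothing overlaps, so nothing can be pending on the
		 * bind-queue for this range: apply the mapping synchronously.
		 */
		ret = ex_vm_bind_sync(vm, op);
	} else {
		/* Overlaps an existing (possibly still pending) mapping:
		 * keep ordering by queueing behind the async bind-queue.
		 * Distinguishing "active" from "pending" mappings would
		 * allow fast-tracking more cases, as discussed above.
		 */
		ret = ex_vm_bind_queue_async(vm, op);
	}

	mutex_unlock(&vm->lock);
	return ret;
}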


User-space is allowed to create new bind queues at will, and they 
execute independently save for range overlaps.


And the overlapping granularity depends very much on the detail of the 
range tracking.

We drafted this fenced range utility


Re: [Nouveau] [PATCH drm-misc-next v3 6/7] drm/gpuvm: generalize dma_resv/extobj handling and GEM validation

2023-09-13 Thread Danilo Krummrich
As mentioned in a different mail thread, the reply is based on the assumption
that we don't support anything other than GPUVM updates from the IOCTL.

On Wed, Sep 13, 2023 at 11:14:46AM +0200, Thomas Hellström wrote:
> Hi!
> 
> On Wed, 2023-09-13 at 01:36 +0200, Danilo Krummrich wrote:
> > On Tue, Sep 12, 2023 at 09:23:08PM +0200, Thomas Hellström wrote:
> > > 
> > > On 9/12/23 18:50, Danilo Krummrich wrote:
> > > > On Tue, Sep 12, 2023 at 06:20:32PM +0200, Thomas Hellström wrote:
> > > > > Hi, Danilo,
> > > > > 
> > > > > On 9/9/23 17:31, Danilo Krummrich wrote:
> > > > > > So far the DRM GPUVA manager offers common infrastructure to
> > > > > > track GPU VA
> > > > > > allocations and mappings, generically connect GPU VA mappings
> > > > > > to their
> > > > > > backing buffers and perform more complex mapping operations
> > > > > > on the GPU VA
> > > > > > space.
> > > > > > 
> > > > > > However, there are more design patterns commonly used by
> > > > > > drivers, which
> > > > > > can potentially be generalized in order to make the DRM GPUVA
> > > > > > manager
> > > > > > represent a basic GPU-VM implementation. In this context,
> > > > > > this patch aims
> > > > > > at generalizing the following elements.
> > > > > > 
> > > > > > 1) Provide a common dma-resv for GEM objects not being used
> > > > > > outside of
> > > > > >  this GPU-VM.
> > > > > > 
> > > > > > 2) Provide tracking of external GEM objects (GEM objects
> > > > > > which are
> > > > > >  shared with other GPU-VMs).
> > > > > > 
> > > > > > 3) Provide functions to efficiently lock all GEM objects dma-
> > > > > > resv the
> > > > > >  GPU-VM contains mappings of.
> > > > > > 
> > > > > > 4) Provide tracking of evicted GEM objects the GPU-VM
> > > > > > contains mappings
> > > > > >  of, such that validation of evicted GEM objects is
> > > > > > accelerated.
> > > > > > 
> > > > > > 5) Provide some convenience functions for common patterns.
> > > > > > 
> > > > > > Rather than being designed as a "framework", the target is to
> > > > > > make all
> > > > > > features appear as a collection of optional helper functions,
> > > > > > such that
> > > > > > drivers are free to make use of the DRM GPUVA managers basic
> > > > > > functionality and opt-in for other features without setting
> > > > > > any feature
> > > > > > flags, just by making use of the corresponding functions.
> > > > > > 
> > > > > > Big kudos to Boris Brezillon for his help to figure out
> > > > > > locking for drivers
> > > > > > updating the GPU VA space within the fence signalling path.
> > > > > > 
> > > > > > Suggested-by: Matthew Brost 
> > > > > > Signed-off-by: Danilo Krummrich 
> > > > > > ---
> > > > > >    drivers/gpu/drm/drm_gpuvm.c | 516
> > > > > > 
> > > > > >    include/drm/drm_gpuvm.h | 197 ++
> > > > > >    2 files changed, 713 insertions(+)
> > > > > > 
> > > > > > diff --git a/drivers/gpu/drm/drm_gpuvm.c
> > > > > > b/drivers/gpu/drm/drm_gpuvm.c
> > > > > > index f4411047dbb3..8e62a043f719 100644
> > > > > > --- a/drivers/gpu/drm/drm_gpuvm.c
> > > > > > +++ b/drivers/gpu/drm/drm_gpuvm.c
> > > > > > @@ -73,6 +73,21 @@
> > > > > >     * _gem_object list of _gpuvm_bos for an existing
> > > > > > instance of this
> > > > > >     * particular combination. If not existent a new instance
> > > > > > is created and linked
> > > > > >     * to the _gem_object.
> > > > > > + *
> > > > > > + * _gpuvm_bo structures, since unique for a given
> > > > > > _gpuvm, are also used
> > > > > > + * as entry for the _gpuvm's lists of external and
> > > > > > evicted objects. Those
> > > > > > + * list are maintained in order to accelerate locking of
> > > > > > dma-resv locks and
> > > > > > + * validation of evicted objects bound in a _gpuvm. For
> > > > > > instance the all
> > > > > > + * _gem_object's _resv of a given _gpuvm can be
> > > > > > locked by calling
> > > > > > + * drm_gpuvm_exec_lock(). Once locked drivers can call
> > > > > > drm_gpuvm_validate() in
> > > > > > + * order to validate all evicted _gem_objects. It is
> > > > > > also possible to lock
> > > > > > + * additional _gem_objects by providing the
> > > > > > corresponding parameters to
> > > > > > + * drm_gpuvm_exec_lock() as well as open code the _exec
> > > > > > loop while making
> > > > > > + * use of helper functions such as drm_gpuvm_prepare_range()
> > > > > > or
> > > > > > + * drm_gpuvm_prepare_objects().
> > > > > > + *
> > > > > > + * Every bound _gem_object is treated as external object
> > > > > > when its _resv
> > > > > > + * structure is different than the _gpuvm's common
> > > > > > _resv structure.
> > > > > >     */
> > > > > >    /**
> > > > > > @@ -420,6 +435,20 @@
> > > > > >     * Subsequent calls to drm_gpuvm_bo_obtain() for the same
> > > > > > _gpuvm and
> > > > > >     * _gem_object must be able to observe previous
> > > > > > creations and destructions
> > > > > >     * of _gpuvm_bos in order to keep 

Re: [Nouveau] [PATCH drm-misc-next v3 6/7] drm/gpuvm: generalize dma_resv/extobj handling and GEM validation

2023-09-13 Thread Danilo Krummrich

After some more discussion with Boris on IRC, he seems to be willing to drop 
GPUVM
updates from the async path. If everyone agrees I'm fine to go ahead and drop 
this
use case for GPUVM.

@Thomas: I will reply to your last mail only considering GPUVM updates from 
within
the IOCTL.

- Danilo

On 9/13/23 13:33, Boris Brezillon wrote:

On Wed, 13 Sep 2023 12:39:01 +0200
Thomas Hellström  wrote:


Hi,

On 9/13/23 09:19, Boris Brezillon wrote:

On Wed, 13 Sep 2023 17:05:42 +1000
Dave Airlie  wrote:
  

On Wed, 13 Sept 2023 at 17:03, Boris Brezillon
 wrote:

On Tue, 12 Sep 2023 18:20:32 +0200
Thomas Hellström  wrote:
 

+/**
+ * get_next_vm_bo_from_list() - get the next vm_bo element
+ * @__gpuvm: The GPU VM
+ * @__list_name: The name of the list we're iterating on
+ * @__local_list: A pointer to the local list used to store already iterated 
items
+ * @__prev_vm_bo: The previous element we got from 
drm_gpuvm_get_next_cached_vm_bo()
+ *
+ * This helper is here to provide lockless list iteration. Lockless as in, the
+ * iterator releases the lock immediately after picking the first element from
+ * the list, so list insertion deletion can happen concurrently.

Are the list spinlocks needed for that async state update from within
the dma-fence critical section we've discussed previously?

Any driver calling _[un]link() from its drm_gpu_scheduler::run_job()
hook will be in this situation (Panthor at the moment, PowerVR soon). I
get that Xe and Nouveau don't need that because they update the VM
state early (in the ioctl path), but I keep thinking this will hurt us
if we don't think it through from the beginning, because once you've
set this logic to depend only on resv locks, it will be pretty hard to
get back to a solution which lets synchronous VM_BINDs take precedence
on asynchronous request, and, with vkQueueBindSparse() passing external
deps (plus the fact the VM_BIND queue might be pretty deep), it can
take a long time to get your synchronous VM_BIND executed...


So this would boil down to either (possibly opt-in) keeping the spinlock
approach or pushing the unlink out to a wq then?


Deferred _unlink() would not be an issue, since I already defer the
drm_gpuva destruction to a wq, it would just be a matter of moving the
_unlink() call there as well. But _link() also takes the GEM gpuva list
lock, and that one is a bit tricky, in that sm_map() can trigger 2 more
_link() calls for the prev/next mappings, which we can't guess until we
get to execute the VM update. If we mandate the use of the GEM resv
lock, that simply means async VM updates (AKA calling
drm_gpuvm_sm_[un]map()) are not an option. And if this is what everyone
agrees on, then I'd like the APIs that make this sort of async VM
update possible (drm_gpuvm_sm_[un]map(), the drm_gpuvm_ops::sm_step*
methods, and probably other things) to be dropped, so we don't make it
look like it's something we support.


BTW, as also asked in a reply to Danilo, how do you call unlink from
run_job() when it was requiring the obj->dma_resv lock, or was that a WIP?


_unlink() makes sure the GEM gpuva list lock is taken, but this can be
a custom lock (see drm_gem_gpuva_set_lock()). In panthor we have
panthor_gem_object::gpuva_list_lock that's dedicated to gpuva list
protection. We make sure we never take this lock while allocating
memory to guarantee the dma-signalling path can't deadlock.



 

btw what is the use case for this? do we have actual vulkan
applications we know will have problems here?

I don't, but I think that's a concern Faith raised at some point (dates
back from when I was reading threads describing how VM_BIND on i915
should work, and I was clearly discovering this whole VM_BIND thing at
that time, so maybe I misunderstood).
  

it feels like a bit of premature optimisation, but maybe we have use cases.

Might be, but that's the sort of thing that would put us in a corner if
we don't have a plan for when the needs arise. Besides, if we don't
want to support that case because it's too complicated, I'd recommend
dropping all the drm_gpuvm APIs that let people think this mode is
valid/supported (map/remap/unmap hooks in drm_gpuvm_ops,
drm_gpuvm_sm_[un]map helpers, etc). Keeping them around just adds to the
confusion.


Xe allows bypassing the bind-queue with another bind-queue, but to
completely avoid dependencies between queues the operations may not
overlap.


So, you check the VM state with some VM lock held (would be the VM resv
in my case), and if the mapping is new (no overlaps with pre-existing
mappings), you queue it to the fast-track/sync-VM_BIND queue. What would
be missing I guess is a way to know if the mapping is active (MMU has
been updated) or pending (MMU update queued to the bind-queue), so I can
fast-track mapping/unmapping of active mappings. This would leave
overlapping sync/async VM updates, which can't happen in practice
unless userspace is doing something wrong (sparse bindings always go
through vkQueueBindSparse).


Re: [Nouveau] [PATCH drm-misc-next v3 6/7] drm/gpuvm: generalize dma_resv/extobj handling and GEM validation

2023-09-13 Thread Thomas Hellström

Hi,

On 9/13/23 09:19, Boris Brezillon wrote:

On Wed, 13 Sep 2023 17:05:42 +1000
Dave Airlie  wrote:


On Wed, 13 Sept 2023 at 17:03, Boris Brezillon
 wrote:

On Tue, 12 Sep 2023 18:20:32 +0200
Thomas Hellström  wrote:
  

+/**
+ * get_next_vm_bo_from_list() - get the next vm_bo element
+ * @__gpuvm: The GPU VM
+ * @__list_name: The name of the list we're iterating on
+ * @__local_list: A pointer to the local list used to store already iterated 
items
+ * @__prev_vm_bo: The previous element we got from 
drm_gpuvm_get_next_cached_vm_bo()
+ *
+ * This helper is here to provide lockless list iteration. Lockless as in, the
+ * iterator releases the lock immediately after picking the first element from
+ * the list, so list insertion deletion can happen concurrently.

Are the list spinlocks needed for that async state update from within
the dma-fence critical section we've discussed previously?

Any driver calling _[un]link() from its drm_gpu_scheduler::run_job()
hook will be in this situation (Panthor at the moment, PowerVR soon). I
get that Xe and Nouveau don't need that because they update the VM
state early (in the ioctl path), but I keep thinking this will hurt us
if we don't think it through from the beginning, because once you've
set this logic to depend only on resv locks, it will be pretty hard to
get back to a solution which lets synchronous VM_BINDs take precedence
on asynchronous request, and, with vkQueueBindSparse() passing external
deps (plus the fact the VM_BIND queue might be pretty deep), it can
take a long time to get your synchronous VM_BIND executed...


So this would boil down to either (possibly opt-in) keeping the spinlock 
approach or pushing the unlink out to a wq then?
BTW, as also asked in a reply to Danilo, how do you call unlink from 
run_job() when it was requiring the obj->dma_resv lock, or was that a WIP?


  

btw what is the use case for this? do we have actual vulkan
applications we know will have problems here?

I don't, but I think that's a concern Faith raised at some point (dates
back from when I was reading threads describing how VM_BIND on i915
should work, and I was clearly discovering this whole VM_BIND thing at
that time, so maybe I misunderstood).


it feels like a bit of premature optimisation, but maybe we have use cases.

Might be, but that's the sort of thing that would put us in a corner if
we don't have a plan for when the needs arise. Besides, if we don't
want to support that case because it's too complicated, I'd recommend
dropping all the drm_gpuvm APIs that let people think this mode is
valid/supported (map/remap/unmap hooks in drm_gpuvm_ops,
drm_gpuvm_sm_[un]map helpers, etc). Keeping them around just adds to the
confusion.


Xe allows bypassing the bind-queue with another bind-queue, but to 
completely avoid dependencies between queues the operations may not 
overlap (and the definition of overlap is currently that page-table 
structure updates may not overlap), but no guarantees are made about 
priority.


/Thomas





Re: [Nouveau] [PATCH drm-misc-next v3 6/7] drm/gpuvm: generalize dma_resv/extobj handling and GEM validation

2023-09-13 Thread Thomas Hellström
Hi!

On Wed, 2023-09-13 at 01:36 +0200, Danilo Krummrich wrote:
> On Tue, Sep 12, 2023 at 09:23:08PM +0200, Thomas Hellström wrote:
> > 
> > On 9/12/23 18:50, Danilo Krummrich wrote:
> > > On Tue, Sep 12, 2023 at 06:20:32PM +0200, Thomas Hellström wrote:
> > > > Hi, Danilo,
> > > > 
> > > > On 9/9/23 17:31, Danilo Krummrich wrote:
> > > > > So far the DRM GPUVA manager offers common infrastructure to
> > > > > track GPU VA
> > > > > allocations and mappings, generically connect GPU VA mappings
> > > > > to their
> > > > > backing buffers and perform more complex mapping operations
> > > > > on the GPU VA
> > > > > space.
> > > > > 
> > > > > However, there are more design patterns commonly used by
> > > > > drivers, which
> > > > > can potentially be generalized in order to make the DRM GPUVA
> > > > > manager
> > > > > represent a basic GPU-VM implementation. In this context,
> > > > > this patch aims
> > > > > at generalizing the following elements.
> > > > > 
> > > > > 1) Provide a common dma-resv for GEM objects not being used
> > > > > outside of
> > > > >  this GPU-VM.
> > > > > 
> > > > > 2) Provide tracking of external GEM objects (GEM objects
> > > > > which are
> > > > >  shared with other GPU-VMs).
> > > > > 
> > > > > 3) Provide functions to efficiently lock all GEM objects dma-
> > > > > resv the
> > > > >  GPU-VM contains mappings of.
> > > > > 
> > > > > 4) Provide tracking of evicted GEM objects the GPU-VM
> > > > > contains mappings
> > > > >  of, such that validation of evicted GEM objects is
> > > > > accelerated.
> > > > > 
> > > > > 5) Provide some convenience functions for common patterns.
> > > > > 
> > > > > Rather than being designed as a "framework", the target is to
> > > > > make all
> > > > > features appear as a collection of optional helper functions,
> > > > > such that
> > > > > drivers are free to make use of the DRM GPUVA managers basic
> > > > > functionality and opt-in for other features without setting
> > > > > any feature
> > > > > flags, just by making use of the corresponding functions.
> > > > > 
> > > > > Big kudos to Boris Brezillon for his help to figure out
> > > > > locking for drivers
> > > > > updating the GPU VA space within the fence signalling path.
> > > > > 
> > > > > Suggested-by: Matthew Brost 
> > > > > Signed-off-by: Danilo Krummrich 
> > > > > ---
> > > > >    drivers/gpu/drm/drm_gpuvm.c | 516
> > > > > 
> > > > >    include/drm/drm_gpuvm.h | 197 ++
> > > > >    2 files changed, 713 insertions(+)
> > > > > 
> > > > > diff --git a/drivers/gpu/drm/drm_gpuvm.c
> > > > > b/drivers/gpu/drm/drm_gpuvm.c
> > > > > index f4411047dbb3..8e62a043f719 100644
> > > > > --- a/drivers/gpu/drm/drm_gpuvm.c
> > > > > +++ b/drivers/gpu/drm/drm_gpuvm.c
> > > > > @@ -73,6 +73,21 @@
> > > > >     * _gem_object list of _gpuvm_bos for an existing
> > > > > instance of this
> > > > >     * particular combination. If not existent a new instance
> > > > > is created and linked
> > > > >     * to the _gem_object.
> > > > > + *
> > > > > + * _gpuvm_bo structures, since unique for a given
> > > > > _gpuvm, are also used
> > > > > + * as entry for the _gpuvm's lists of external and
> > > > > evicted objects. Those
> > > > > + * list are maintained in order to accelerate locking of
> > > > > dma-resv locks and
> > > > > + * validation of evicted objects bound in a _gpuvm. For
> > > > > instance the all
> > > > > + * _gem_object's _resv of a given _gpuvm can be
> > > > > locked by calling
> > > > > + * drm_gpuvm_exec_lock(). Once locked drivers can call
> > > > > drm_gpuvm_validate() in
> > > > > + * order to validate all evicted _gem_objects. It is
> > > > > also possible to lock
> > > > > + * additional _gem_objects by providing the
> > > > > corresponding parameters to
> > > > > + * drm_gpuvm_exec_lock() as well as open code the _exec
> > > > > loop while making
> > > > > + * use of helper functions such as drm_gpuvm_prepare_range()
> > > > > or
> > > > > + * drm_gpuvm_prepare_objects().
> > > > > + *
> > > > > + * Every bound _gem_object is treated as external object
> > > > > when its _resv
> > > > > + * structure is different than the _gpuvm's common
> > > > > _resv structure.
> > > > >     */
> > > > >    /**
> > > > > @@ -420,6 +435,20 @@
> > > > >     * Subsequent calls to drm_gpuvm_bo_obtain() for the same
> > > > > _gpuvm and
> > > > >     * _gem_object must be able to observe previous
> > > > > creations and destructions
> > > > >     * of _gpuvm_bos in order to keep instances unique.
> > > > > + *
> > > > > + * The _gpuvm's lists for keeping track of external and
> > > > > evicted objects are
> > > > > + * protected against concurrent insertion / removal and
> > > > > iteration internally.
> > > > > + *
> > > > > + * However, drivers still need to protect concurrent
> > > > > calls to functions
> > > > > + * iterating those lists, such as drm_gpuvm_validate() and
> > > > > + * 

Re: [Nouveau] [PATCH drm-misc-next v3 6/7] drm/gpuvm: generalize dma_resv/extobj handling and GEM validation

2023-09-13 Thread Dave Airlie
On Wed, 13 Sept 2023 at 17:03, Boris Brezillon
 wrote:
>
> On Tue, 12 Sep 2023 18:20:32 +0200
> Thomas Hellström  wrote:
>
> > > +/**
> > > + * get_next_vm_bo_from_list() - get the next vm_bo element
> > > + * @__gpuvm: The GPU VM
> > > + * @__list_name: The name of the list we're iterating on
> > > + * @__local_list: A pointer to the local list used to store already 
> > > iterated items
> > > + * @__prev_vm_bo: The previous element we got from 
> > > drm_gpuvm_get_next_cached_vm_bo()
> > > + *
> > > + * This helper is here to provide lockless list iteration. Lockless as 
> > > in, the
> > > + * iterator releases the lock immediately after picking the first 
> > > element from
> > > + * the list, so list insertion deletion can happen concurrently.
> >
> > Are the list spinlocks needed for that async state update from within
> > the dma-fence critical section we've discussed previously?
>
> Any driver calling _[un]link() from its drm_gpu_scheduler::run_job()
> hook will be in this situation (Panthor at the moment, PowerVR soon). I
> get that Xe and Nouveau don't need that because they update the VM
> state early (in the ioctl path), but I keep thinking this will hurt us
> if we don't think it through from the beginning, because once you've
> set this logic to depend only on resv locks, it will be pretty hard to
> get back to a solution which lets synchronous VM_BINDs take precedence
> on asynchronous request, and, with vkQueueBindSparse() passing external
> deps (plus the fact the VM_BIND queue might be pretty deep), it can
> take a long time to get your synchronous VM_BIND executed...

btw what is the use case for this? do we have actual vulkan
applications we know will have problems here?

it feels like a bit of premature optimisation, but maybe we have use cases.

Dave.


Re: [Nouveau] [PATCH drm-misc-next v3 6/7] drm/gpuvm: generalize dma_resv/extobj handling and GEM validation

2023-09-12 Thread Danilo Krummrich
On Tue, Sep 12, 2023 at 09:23:08PM +0200, Thomas Hellström wrote:
> 
> On 9/12/23 18:50, Danilo Krummrich wrote:
> > On Tue, Sep 12, 2023 at 06:20:32PM +0200, Thomas Hellström wrote:
> > > Hi, Danilo,
> > > 
> > > On 9/9/23 17:31, Danilo Krummrich wrote:
> > > > So far the DRM GPUVA manager offers common infrastructure to track GPU 
> > > > VA
> > > > allocations and mappings, generically connect GPU VA mappings to their
> > > > backing buffers and perform more complex mapping operations on the GPU 
> > > > VA
> > > > space.
> > > > 
> > > > However, there are more design patterns commonly used by drivers, which
> > > > can potentially be generalized in order to make the DRM GPUVA manager
> > > > represent a basic GPU-VM implementation. In this context, this patch 
> > > > aims
> > > > at generalizing the following elements.
> > > > 
> > > > 1) Provide a common dma-resv for GEM objects not being used outside of
> > > >  this GPU-VM.
> > > > 
> > > > 2) Provide tracking of external GEM objects (GEM objects which are
> > > >  shared with other GPU-VMs).
> > > > 
> > > > 3) Provide functions to efficiently lock all GEM objects dma-resv the
> > > >  GPU-VM contains mappings of.
> > > > 
> > > > 4) Provide tracking of evicted GEM objects the GPU-VM contains mappings
> > > >  of, such that validation of evicted GEM objects is accelerated.
> > > > 
> > > > 5) Provide some convenience functions for common patterns.
> > > > 
> > > > Rather than being designed as a "framework", the target is to make all
> > > > features appear as a collection of optional helper functions, such that
> > > > drivers are free to make use of the DRM GPUVA managers basic
> > > > functionality and opt-in for other features without setting any feature
> > > > flags, just by making use of the corresponding functions.
> > > > 
> > > > Big kudos to Boris Brezillon for his help to figure out locking for 
> > > > drivers
> > > > updating the GPU VA space within the fence signalling path.
> > > > 
> > > > Suggested-by: Matthew Brost 
> > > > Signed-off-by: Danilo Krummrich 
> > > > ---
> > > >drivers/gpu/drm/drm_gpuvm.c | 516 
> > > > 
> > > >include/drm/drm_gpuvm.h | 197 ++
> > > >2 files changed, 713 insertions(+)
> > > > 
> > > > diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
> > > > index f4411047dbb3..8e62a043f719 100644
> > > > --- a/drivers/gpu/drm/drm_gpuvm.c
> > > > +++ b/drivers/gpu/drm/drm_gpuvm.c
> > > > @@ -73,6 +73,21 @@
> > > > * _gem_object list of _gpuvm_bos for an existing instance 
> > > > of this
> > > > * particular combination. If not existent a new instance is created 
> > > > and linked
> > > > * to the _gem_object.
> > > > + *
> > > > + * _gpuvm_bo structures, since unique for a given _gpuvm, are 
> > > > also used
> > > > + * as entry for the _gpuvm's lists of external and evicted 
> > > > objects. Those
> > > > + * list are maintained in order to accelerate locking of dma-resv 
> > > > locks and
> > > > + * validation of evicted objects bound in a _gpuvm. For instance 
> > > > the all
> > > > + * _gem_object's _resv of a given _gpuvm can be locked by 
> > > > calling
> > > > + * drm_gpuvm_exec_lock(). Once locked drivers can call 
> > > > drm_gpuvm_validate() in
> > > > + * order to validate all evicted _gem_objects. It is also possible 
> > > > to lock
> > > > + * additional _gem_objects by providing the corresponding 
> > > > parameters to
> > > > + * drm_gpuvm_exec_lock() as well as open code the _exec loop while 
> > > > making
> > > > + * use of helper functions such as drm_gpuvm_prepare_range() or
> > > > + * drm_gpuvm_prepare_objects().
> > > > + *
> > > > + * Every bound _gem_object is treated as external object when its 
> > > > _resv
> > > > + * structure is different than the _gpuvm's common _resv 
> > > > structure.
> > > > */
> > > >/**
> > > > @@ -420,6 +435,20 @@
> > > > * Subsequent calls to drm_gpuvm_bo_obtain() for the same _gpuvm 
> > > > and
> > > > * _gem_object must be able to observe previous creations and 
> > > > destructions
> > > > * of _gpuvm_bos in order to keep instances unique.
> > > > + *
> > > > + * The _gpuvm's lists for keeping track of external and evicted 
> > > > objects are
> > > > + * protected against concurrent insertion / removal and iteration 
> > > > internally.
> > > > + *
> > > > + * However, drivers still need to protect concurrent calls to 
> > > > functions
> > > > + * iterating those lists, such as drm_gpuvm_validate() and
> > > > + * drm_gpuvm_prepare_objects(). Every such function contains a 
> > > > particular
> > > > + * comment and lockdep checks if possible.
> > > > + *
> > > > + * Functions adding or removing entries from those lists, such as
> > > > + * drm_gpuvm_bo_evict() or drm_gpuvm_bo_extobj_add() may be called 
> > > > with external
> > > > + * locks being held, e.g. in order to avoid the corresponding list to 
> 

Re: [Nouveau] [PATCH drm-misc-next v3 6/7] drm/gpuvm: generalize dma_resv/extobj handling and GEM validation

2023-09-12 Thread Thomas Hellström



On 9/12/23 18:50, Danilo Krummrich wrote:

On Tue, Sep 12, 2023 at 06:20:32PM +0200, Thomas Hellström wrote:

Hi, Danilo,

On 9/9/23 17:31, Danilo Krummrich wrote:

So far the DRM GPUVA manager offers common infrastructure to track GPU VA
allocations and mappings, generically connect GPU VA mappings to their
backing buffers and perform more complex mapping operations on the GPU VA
space.

However, there are more design patterns commonly used by drivers, which
can potentially be generalized in order to make the DRM GPUVA manager
represent a basic GPU-VM implementation. In this context, this patch aims
at generalizing the following elements.

1) Provide a common dma-resv for GEM objects not being used outside of
 this GPU-VM.

2) Provide tracking of external GEM objects (GEM objects which are
 shared with other GPU-VMs).

3) Provide functions to efficiently lock all GEM objects dma-resv the
 GPU-VM contains mappings of.

4) Provide tracking of evicted GEM objects the GPU-VM contains mappings
 of, such that validation of evicted GEM objects is accelerated.

5) Provide some convenience functions for common patterns.

Rather than being designed as a "framework", the target is to make all
features appear as a collection of optional helper functions, such that
drivers are free to make use of the DRM GPUVA managers basic
functionality and opt-in for other features without setting any feature
flags, just by making use of the corresponding functions.

Big kudos to Boris Brezillon for his help to figure out locking for drivers
updating the GPU VA space within the fence signalling path.

Suggested-by: Matthew Brost 
Signed-off-by: Danilo Krummrich 
---
   drivers/gpu/drm/drm_gpuvm.c | 516 
   include/drm/drm_gpuvm.h | 197 ++
   2 files changed, 713 insertions(+)

diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
index f4411047dbb3..8e62a043f719 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -73,6 +73,21 @@
* _gem_object list of _gpuvm_bos for an existing instance of this
* particular combination. If not existent a new instance is created and 
linked
* to the _gem_object.
+ *
+ * _gpuvm_bo structures, since unique for a given _gpuvm, are also used
+ * as entry for the _gpuvm's lists of external and evicted objects. Those
+ * list are maintained in order to accelerate locking of dma-resv locks and
+ * validation of evicted objects bound in a _gpuvm. For instance the all
+ * _gem_object's _resv of a given _gpuvm can be locked by calling
+ * drm_gpuvm_exec_lock(). Once locked drivers can call drm_gpuvm_validate() in
+ * order to validate all evicted _gem_objects. It is also possible to lock
+ * additional _gem_objects by providing the corresponding parameters to
+ * drm_gpuvm_exec_lock() as well as open code the _exec loop while making
+ * use of helper functions such as drm_gpuvm_prepare_range() or
+ * drm_gpuvm_prepare_objects().
+ *
+ * Every bound _gem_object is treated as external object when its _resv
+ * structure is different than the _gpuvm's common _resv structure.
*/
   /**
@@ -420,6 +435,20 @@
* Subsequent calls to drm_gpuvm_bo_obtain() for the same _gpuvm and
* _gem_object must be able to observe previous creations and 
destructions
* of _gpuvm_bos in order to keep instances unique.
+ *
+ * The _gpuvm's lists for keeping track of external and evicted objects are
+ * protected against concurrent insertion / removal and iteration internally.
+ *
+ * However, drivers still need to protect concurrent calls to functions
+ * iterating those lists, such as drm_gpuvm_validate() and
+ * drm_gpuvm_prepare_objects(). Every such function contains a particular
+ * comment and lockdep checks if possible.
+ *
+ * Functions adding or removing entries from those lists, such as
+ * drm_gpuvm_bo_evict() or drm_gpuvm_bo_extobj_add() may be called with 
external
+ * locks being held, e.g. in order to avoid the corresponding list to be
+ * (safely) modified while potentially being iterated by other API functions.
+ * However, this is entirely optional.
*/
   /**
@@ -632,6 +661,131 @@
*   }
*/
+/**
+ * get_next_vm_bo_from_list() - get the next vm_bo element
+ * @__gpuvm: The GPU VM
+ * @__list_name: The name of the list we're iterating on
+ * @__local_list: A pointer to the local list used to store already iterated 
items
+ * @__prev_vm_bo: The previous element we got from 
drm_gpuvm_get_next_cached_vm_bo()
+ *
+ * This helper is here to provide lockless list iteration. Lockless as in, the
+ * iterator releases the lock immediately after picking the first element from
+ * the list, so list insertion deletion can happen concurrently.

Are the list spinlocks needed for that async state update from within the
dma-fence critical section we've discussed previously?

Yes, but also for other reasons, see below.


Otherwise it should be sufficient to protect the lists 

Re: [Nouveau] [PATCH drm-misc-next v3 6/7] drm/gpuvm: generalize dma_resv/extobj handling and GEM validation

2023-09-12 Thread Danilo Krummrich
On Tue, Sep 12, 2023 at 06:20:32PM +0200, Thomas Hellström wrote:
> Hi, Danilo,
> 
> On 9/9/23 17:31, Danilo Krummrich wrote:
> > So far the DRM GPUVA manager offers common infrastructure to track GPU VA
> > allocations and mappings, generically connect GPU VA mappings to their
> > backing buffers and perform more complex mapping operations on the GPU VA
> > space.
> > 
> > However, there are more design patterns commonly used by drivers, which
> > can potentially be generalized in order to make the DRM GPUVA manager
> > represent a basic GPU-VM implementation. In this context, this patch aims
> > at generalizing the following elements.
> > 
> > 1) Provide a common dma-resv for GEM objects not being used outside of
> > this GPU-VM.
> > 
> > 2) Provide tracking of external GEM objects (GEM objects which are
> > shared with other GPU-VMs).
> > 
> > 3) Provide functions to efficiently lock all GEM objects dma-resv the
> > GPU-VM contains mappings of.
> > 
> > 4) Provide tracking of evicted GEM objects the GPU-VM contains mappings
> > of, such that validation of evicted GEM objects is accelerated.
> > 
> > 5) Provide some convenience functions for common patterns.
> > 
> > Rather than being designed as a "framework", the target is to make all
> > features appear as a collection of optional helper functions, such that
> > drivers are free to make use of the DRM GPUVA managers basic
> > functionality and opt-in for other features without setting any feature
> > flags, just by making use of the corresponding functions.
> > 
> > Big kudos to Boris Brezillon for his help to figure out locking for drivers
> > updating the GPU VA space within the fence signalling path.
> > 
> > Suggested-by: Matthew Brost 
> > Signed-off-by: Danilo Krummrich 
> > ---
> >   drivers/gpu/drm/drm_gpuvm.c | 516 
> >   include/drm/drm_gpuvm.h | 197 ++
> >   2 files changed, 713 insertions(+)
> > 
> > diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
> > index f4411047dbb3..8e62a043f719 100644
> > --- a/drivers/gpu/drm/drm_gpuvm.c
> > +++ b/drivers/gpu/drm/drm_gpuvm.c
> > @@ -73,6 +73,21 @@
> >* _gem_object list of _gpuvm_bos for an existing instance of this
> >* particular combination. If not existent a new instance is created and 
> > linked
> >* to the _gem_object.
> > + *
> > + * _gpuvm_bo structures, since unique for a given _gpuvm, are also 
> > used
> > + * as entry for the _gpuvm's lists of external and evicted objects. 
> > Those
> > + * list are maintained in order to accelerate locking of dma-resv locks and
> > + * validation of evicted objects bound in a _gpuvm. For instance the 
> > all
> > + * _gem_object's _resv of a given _gpuvm can be locked by 
> > calling
> > + * drm_gpuvm_exec_lock(). Once locked drivers can call 
> > drm_gpuvm_validate() in
> > + * order to validate all evicted _gem_objects. It is also possible to 
> > lock
> > + * additional _gem_objects by providing the corresponding parameters to
> > + * drm_gpuvm_exec_lock() as well as open code the _exec loop while 
> > making
> > + * use of helper functions such as drm_gpuvm_prepare_range() or
> > + * drm_gpuvm_prepare_objects().
> > + *
> > + * Every bound _gem_object is treated as external object when its 
> > _resv
> > + * structure is different than the _gpuvm's common _resv structure.
> >*/
> >   /**
> > @@ -420,6 +435,20 @@
> >* Subsequent calls to drm_gpuvm_bo_obtain() for the same _gpuvm and
> >* _gem_object must be able to observe previous creations and 
> > destructions
> >* of _gpuvm_bos in order to keep instances unique.
> > + *
> > + * The _gpuvm's lists for keeping track of external and evicted 
> > objects are
> > + * protected against concurrent insertion / removal and iteration 
> > internally.
> > + *
> > + * However, drivers still need to protect concurrent calls to 
> > functions
> > + * iterating those lists, such as drm_gpuvm_validate() and
> > + * drm_gpuvm_prepare_objects(). Every such function contains a particular
> > + * comment and lockdep checks if possible.
> > + *
> > + * Functions adding or removing entries from those lists, such as
> > + * drm_gpuvm_bo_evict() or drm_gpuvm_bo_extobj_add() may be called with 
> > external
> > + * locks being held, e.g. in order to avoid the corresponding list to be
> > + * (safely) modified while potentially being iterated by other API 
> > functions.
> > + * However, this is entirely optional.
> >*/
> >   /**
> > @@ -632,6 +661,131 @@
> >*}
> >*/
> > +/**
> > + * get_next_vm_bo_from_list() - get the next vm_bo element
> > + * @__gpuvm: The GPU VM
> > + * @__list_name: The name of the list we're iterating on
> > + * @__local_list: A pointer to the local list used to store already 
> > iterated items
> > + * @__prev_vm_bo: The previous element we got from 
> > drm_gpuvm_get_next_cached_vm_bo()
> > + *
> > + * This helper is here to provide lockless list 

Re: [Nouveau] [PATCH drm-misc-next v3 6/7] drm/gpuvm: generalize dma_resv/extobj handling and GEM validation

2023-09-12 Thread Thomas Hellström

Hi, Danilo,

On 9/9/23 17:31, Danilo Krummrich wrote:

So far the DRM GPUVA manager offers common infrastructure to track GPU VA
allocations and mappings, generically connect GPU VA mappings to their
backing buffers and perform more complex mapping operations on the GPU VA
space.

However, there are more design patterns commonly used by drivers, which
can potentially be generalized in order to make the DRM GPUVA manager
represent a basic GPU-VM implementation. In this context, this patch aims
at generalizing the following elements.

1) Provide a common dma-resv for GEM objects not being used outside of
this GPU-VM.

2) Provide tracking of external GEM objects (GEM objects which are
shared with other GPU-VMs).

3) Provide functions to efficiently lock all GEM objects dma-resv the
GPU-VM contains mappings of.

4) Provide tracking of evicted GEM objects the GPU-VM contains mappings
of, such that validation of evicted GEM objects is accelerated.

5) Provide some convenience functions for common patterns.

Rather than being designed as a "framework", the target is to make all
features appear as a collection of optional helper functions, such that
drivers are free to make use of the DRM GPUVA managers basic
functionality and opt-in for other features without setting any feature
flags, just by making use of the corresponding functions.

Big kudos to Boris Brezillon for his help to figure out locking for drivers
updating the GPU VA space within the fence signalling path.

Suggested-by: Matthew Brost 
Signed-off-by: Danilo Krummrich 
---
  drivers/gpu/drm/drm_gpuvm.c | 516 
  include/drm/drm_gpuvm.h | 197 ++
  2 files changed, 713 insertions(+)

diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
index f4411047dbb3..8e62a043f719 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -73,6 +73,21 @@
   * _gem_object list of _gpuvm_bos for an existing instance of this
   * particular combination. If not existent a new instance is created and 
linked
   * to the _gem_object.
+ *
+ * _gpuvm_bo structures, since unique for a given _gpuvm, are also used
+ * as entry for the _gpuvm's lists of external and evicted objects. Those
+ * list are maintained in order to accelerate locking of dma-resv locks and
+ * validation of evicted objects bound in a _gpuvm. For instance the all
+ * _gem_object's _resv of a given _gpuvm can be locked by calling
+ * drm_gpuvm_exec_lock(). Once locked drivers can call drm_gpuvm_validate() in
+ * order to validate all evicted _gem_objects. It is also possible to lock
+ * additional _gem_objects by providing the corresponding parameters to
+ * drm_gpuvm_exec_lock() as well as open code the _exec loop while making
+ * use of helper functions such as drm_gpuvm_prepare_range() or
+ * drm_gpuvm_prepare_objects().
+ *
+ * Every bound _gem_object is treated as external object when its _resv
+ * structure is different than the _gpuvm's common _resv structure.
   */
  
  /**

@@ -420,6 +435,20 @@
   * Subsequent calls to drm_gpuvm_bo_obtain() for the same _gpuvm and
   * _gem_object must be able to observe previous creations and destructions
   * of _gpuvm_bos in order to keep instances unique.
+ *
+ * The _gpuvm's lists for keeping track of external and evicted objects are
+ * protected against concurrent insertion / removal and iteration internally.
+ *
+ * However, drivers still need to protect concurrent calls to functions
+ * iterating those lists, such as drm_gpuvm_validate() and
+ * drm_gpuvm_prepare_objects(). Every such function contains a particular
+ * comment and lockdep checks if possible.
+ *
+ * Functions adding or removing entries from those lists, such as
+ * drm_gpuvm_bo_evict() or drm_gpuvm_bo_extobj_add() may be called with 
external
+ * locks being held, e.g. in order to avoid the corresponding list to be
+ * (safely) modified while potentially being iterated by other API functions.
+ * However, this is entirely optional.
   */
  
  /**

@@ -632,6 +661,131 @@
   *}
   */
  
+/**

+ * get_next_vm_bo_from_list() - get the next vm_bo element
+ * @__gpuvm: The GPU VM
+ * @__list_name: The name of the list we're iterating on
+ * @__local_list: A pointer to the local list used to store already iterated 
items
+ * @__prev_vm_bo: The previous element we got from 
drm_gpuvm_get_next_cached_vm_bo()
+ *
+ * This helper is here to provide lockless list iteration. Lockless as in, the
+ * iterator releases the lock immediately after picking the first element from
+ * the list, so list insertion deletion can happen concurrently.


Are the list spinlocks needed for that async state update from within 
the dma-fence critical section we've discussed previously?


Otherwise it should be sufficient to protect the lists with the gpuvm's 
resv (or for the extobj list with an outer lock).


If those spinlocks are still needed in some situations, perhaps could we 
have an option to 

Re: [Nouveau] [PATCH drm-misc-next v3 6/7] drm/gpuvm: generalize dma_resv/extobj handling and GEM validation

2023-09-11 Thread Danilo Krummrich
On Mon, Sep 11, 2023 at 04:45:26PM +0200, Boris Brezillon wrote:
> On Sat,  9 Sep 2023 17:31:13 +0200
> Danilo Krummrich  wrote:
> 
> > @@ -807,6 +1262,14 @@ drm_gpuvm_bo_destroy(struct kref *kref)
> >  
> > drm_gem_gpuva_assert_lock_held(vm_bo->obj);
> >  
> > +   spin_lock(>extobj.lock);
> > +   list_del(_bo->list.entry.extobj);
> > +   spin_unlock(>extobj.lock);
> > +
> > +   spin_lock(>evict.lock);
> > +   list_del(_bo->list.entry.evict);
> > +   spin_unlock(>evict.lock);
> > +
> > list_del(_bo->list.entry.gem);
> >  
> > drm_gem_object_put(obj);
> 
> I ran into a UAF situation when the drm_gpuvm_bo object is the last
> owner of obj, because the lock that's supposed to be held when calling
> this function (drm_gem_gpuva_assert_lock_held() call above), belongs to
> obj (either obj->resv, or a driver specific lock that's attached to the
> driver-specific GEM object). I worked around it by taking a ref to obj
> before calling lock()+drm_gpuvm_bo_put()+unlock(), and releasing it
> after I'm done with the lock, but that just feels wrong.
> 
As mentioned in a previous reply, I think we want to bring the dedicated GEM
gpuva list lock back instead of abusing the dma-resv lock. This way we can
handle locking internally and don't run into such issues.

There is also no reason for a driver to already hold the GEM gpuva list lock
when calling drm_gpuvm_bo_put(). Drivers would only acquire the lock to
iterate the GEM's list of drm_gpuvm_bos or the drm_gpuvm_bo's list of drm_gpuvas.
And dropping the drm_gpuvm_bo from within such a loop is forbidden anyways.
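(For completeness, the ref-juggling workaround described above would look
roughly like the sketch below; the GEM's dma-resv stands in for whatever
gpuva-list lock the driver registered, and ex_vm_bo_put() is just a
hypothetical wrapper, not part of the series.)

static void ex_vm_bo_put(struct drm_gpuvm_bo *vm_bo)
{
	struct drm_gem_object *obj = vm_bo->obj;

	/* Keep obj alive past the final vm_bo reference, since the lock we
	 * hold while dropping that reference lives inside obj.
	 */
	drm_gem_object_get(obj);

	dma_resv_lock(obj->resv, NULL);
	drm_gpuvm_bo_put(vm_bo);
	dma_resv_unlock(obj->resv);

	drm_gem_object_put(obj);
}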



Re: [Nouveau] [PATCH drm-misc-next v3 6/7] drm/gpuvm: generalize dma_resv/extobj handling and GEM validation

2023-09-11 Thread Danilo Krummrich
On Mon, Sep 11, 2023 at 12:35:26PM +0200, Boris Brezillon wrote:
> Hello Danilo,
> 
> On Sat,  9 Sep 2023 17:31:13 +0200
> Danilo Krummrich  wrote:
> 
> 
> > @@ -632,6 +661,131 @@
> >   * }
> >   */
> >  
> > +/**
> > + * get_next_vm_bo_from_list() - get the next vm_bo element
> > + * @__gpuvm: The GPU VM
> > + * @__list_name: The name of the list we're iterating on
> > + * @__local_list: A pointer to the local list used to store already 
> > iterated items
> > + * @__prev_vm_bo: The previous element we got from 
> > drm_gpuvm_get_next_cached_vm_bo()
> > + *
> > + * This helper is here to provide lockless list iteration. Lockless as in, 
> > the
> > + * iterator releases the lock immediately after picking the first element 
> > from
> > + * the list, so list insertion deletion can happen concurrently.
> > + *
> > + * Elements popped from the original list are kept in a local list, so 
> > removal
> > + * and is_empty checks can still happen while we're iterating the list.
> > + */
> > +#define get_next_vm_bo_from_list(__gpuvm, __list_name, __local_list, 
> > __prev_vm_bo) \
> > +   ({  
> > \
> > +   struct drm_gpuvm_bo *__vm_bo;   
> > \
> > +   
> > \
> > +   drm_gpuvm_bo_put(__prev_vm_bo); 
> > \
> > +   
> > \
> > +   spin_lock(&(__gpuvm)->__list_name.lock);
> > \
> 
> I'm tempted to add a drm_gpuvmlocal_list field, so we
> can catch concurrent iterations with something like:
> 
>   if (!(__gpuvm)->__list_name.local_list)
>   (__gpuvm)->__list_name.local_list = __local_list;
>   else
>   WARN_ON((__gpuvm)->__list_name.local_list != 
> __local_list);
> 
> with (__gpuvm)->__list_name.local_list being restored to NULL
> in restore_vm_bo_list().
> 
> > +   while (!list_empty(&(__gpuvm)->__list_name.list)) { 
> > \
> > +   __vm_bo = 
> > list_first_entry(&(__gpuvm)->__list_name.list,\
> > +  struct drm_gpuvm_bo, 
> > \
> > +  list.entry.__list_name); 
> > \
> > +   if (drm_gpuvm_bo_get_unless_zero(__vm_bo)) {
> > \
> > +   
> > list_move_tail(&(__vm_bo)->list.entry.__list_name,  \
> > +  __local_list);   
> > \
> > +   break;  
> > \
> > +   } else {
> > \
> > +   
> > list_del_init(&(__vm_bo)->list.entry.__list_name);  \
> > +   __vm_bo = NULL; 
> > \
> > +   }   
> > \
> > +   }   
> > \
> > +   spin_unlock(&(__gpuvm)->__list_name.lock);  
> > \
> > +   
> > \
> > +   __vm_bo;
> > \
> > +   })
> > +
> > +/**
> > + * for_each_vm_bo_in_list() - internal vm_bo list iterator
> > + *
> > + * This helper is here to provide lockless list iteration. Lockless as in, 
> > the
> > + * iterator releases the lock immediately after picking the first element 
> > from the
> > + * list, so list insertion and deletion can happen concurrently.
> > + *
> > + * Typical use:
> > + *
> > + * struct drm_gpuvm_bo *vm_bo;
> > + * LIST_HEAD(my_local_list);
> > + *
> > + * ret = 0;
> > + * drm_gpuvm_for_each_vm_bo(gpuvm, , _local_list, vm_bo) {
> > + * ret = do_something_with_vm_bo(..., vm_bo);
> > + * if (ret)
> > + * break;
> > + * }
> > + * drm_gpuvm_bo_put(vm_bo);
> > + * drm_gpuvm_restore_vm_bo_list(gpuvm, , _local_list);
> 
> The names in this example and the helper names don't match.
> 
> > + *
> > + *
> > + * Only used for internal list iterations, not meant to be exposed to the 
> > outside
> > + * world.
> > + */
> > +#define for_each_vm_bo_in_list(__gpuvm, __list_name, __local_list, 
> > __vm_bo)\
> > +   for (__vm_bo = get_next_vm_bo_from_list(__gpuvm, __list_name,   
> > \
> > +   __local_list, NULL);
> > \
> > +__vm_bo;   
> > \
> > +__vm_bo = 

Re: [Nouveau] [PATCH drm-misc-next v3 6/7] drm/gpuvm: generalize dma_resv/extobj handling and GEM validation

2023-09-09 Thread kernel test robot
Hi Danilo,

kernel test robot noticed the following build warnings:

[auto build test WARNING on 6bd3d8da51ca1ec97c724016466606aec7739b9f]

url:
https://github.com/intel-lab-lkp/linux/commits/Danilo-Krummrich/drm-gpuvm-rename-struct-drm_gpuva_manager-to-struct-drm_gpuvm/20230909-233346
base:   6bd3d8da51ca1ec97c724016466606aec7739b9f
patch link:
https://lore.kernel.org/r/20230909153125.30032-7-dakr%40redhat.com
patch subject: [PATCH drm-misc-next v3 6/7] drm/gpuvm: generalize 
dma_resv/extobj handling and GEM validation
config: riscv-defconfig 
(https://download.01.org/0day-ci/archive/20230910/202309100424.unxgr9d4-...@intel.com/config)
compiler: riscv64-linux-gcc (GCC) 13.2.0
reproduce (this is a W=1 build): 
(https://download.01.org/0day-ci/archive/20230910/202309100424.unxgr9d4-...@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot 
| Closes: 
https://lore.kernel.org/oe-kbuild-all/202309100424.unxgr9d4-...@intel.com/

All warnings (new ones prefixed by >>):

>> drivers/gpu/drm/drm_gpuvm.c:734: warning: Function parameter or member 
>> '__gpuvm' not described in 'for_each_vm_bo_in_list'
>> drivers/gpu/drm/drm_gpuvm.c:734: warning: Function parameter or member 
>> '__list_name' not described in 'for_each_vm_bo_in_list'
>> drivers/gpu/drm/drm_gpuvm.c:734: warning: Function parameter or member 
>> '__local_list' not described in 'for_each_vm_bo_in_list'
>> drivers/gpu/drm/drm_gpuvm.c:734: warning: Function parameter or member 
>> '__vm_bo' not described in 'for_each_vm_bo_in_list'


vim +734 drivers/gpu/drm/drm_gpuvm.c

32  
33  /**
34   * DOC: Overview
35   *
36   * The DRM GPU VA Manager, represented by struct drm_gpuvm keeps track of a
37   * GPU's virtual address (VA) space and manages the corresponding virtual
38   * mappings represented by &drm_gpuva objects. It also keeps track of the
39   * mapping's backing &drm_gem_object buffers.
40   *
41   * &drm_gem_object buffers maintain a list of &drm_gpuva objects representing
42   * all existent GPU VA mappings using this &drm_gem_object as backing buffer.
43   *
44   * GPU VAs can be flagged as sparse, such that drivers may use GPU VAs to also
45   * keep track of sparse PTEs in order to support Vulkan 'Sparse Resources'.
46   *
47   * The GPU VA manager internally uses a rb-tree to manage the
48   * &drm_gpuva mappings within a GPU's virtual address space.
49   *
50   * The &drm_gpuvm structure contains a special &drm_gpuva representing the
51   * portion of VA space reserved by the kernel. This node is initialized together
52   * with the GPU VA manager instance and removed when the GPU VA manager is
53   * destroyed.
54   *
55   * In a typical application drivers would embed struct drm_gpuvm and
56   * struct drm_gpuva within their own driver specific structures, there won't be
57   * any memory allocations of its own nor memory allocations of &drm_gpuva
58   * entries.
59   *
60   * The data structures needed to store &drm_gpuvas within the &drm_gpuvm are
61   * contained within struct drm_gpuva already. Hence, for inserting &drm_gpuva
62   * entries from within dma-fence signalling critical sections it is enough to
63   * pre-allocate the &drm_gpuva structures.
64   *
65   * In order to connect a struct drm_gpuva to its backing &drm_gem_object each
66   * &drm_gem_object maintains a list of &drm_gpuvm_bo structures, and each
67   * &drm_gpuvm_bo contains a list of &drm_gpuva structures.
68   *
69   * A &drm_gpuvm_bo is an abstraction that represents a combination of a
70   * &drm_gpuvm and a &drm_gem_object. Every such combination should be unique.
71   * This is ensured by the API through drm_gpuvm_bo_obtain() and
72   * drm_gpuvm_bo_obtain_prealloc() which first look into the corresponding
73   * &drm_gem_object list of &drm_gpuvm_bos for an existing instance of this
74   * particular combination. If not existent a new instance is created and linked
75   * to the &drm_gem_object.
76   *
77   * &drm_gpuvm_bo structures, since unique for a given &drm_gpuvm, are also used
78   * as entry for the &drm_gpuvm's lists of external and evicted objects. Those
79   * lists are maintained in order to accelerate locking of dma-resv locks and
80   * validation of evicted objects bound in a &drm_gpuvm. For instance, all
81   * &drm_gem_object's &dma_resv of a given &drm_gpuvm can be locked by calling
82   * drm_gpuvm_exec_lock(). Once locked drivers can call drm_gpuvm_validate() in
83   * order to validate all evicted &drm_gem_objects. It is also possible to lock
84   * additional &drm_gem_objects by providing the corresponding parameters to
85   * drm_gpuvm_exec_lock() as well as open code the &drm_exec loop while making
86   * use of helper functions such as drm_gpuvm_prepare_range() or
87   *