> I asked you in the last thread how you keep the access to base.moved
> safe.

Sorry, I somehow missed that question.

> I just checked it, and it depends on the shared resv lock.
Actually, that's not 100% correct. The moved member is protected by the
BO's resv lock.
That can be the shared one (in the case of PDs/PTs/local BOs), but it
can also be a separate lock.
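As a rough user-space sketch of that rule (a pthread mutex stands in for the reservation lock, and all struct and function names here are illustrative, not the actual amdgpu code): every access to the moved flag happens with the BO's own resv held, regardless of whether that resv is the VM's shared one or a per-BO one.

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/* A pthread mutex stands in for a reservation object's lock. */
struct fake_resv {
	pthread_mutex_t lock;
};

/* PDs/PTs/local BOs point at the VM's shared root resv;
 * other BOs carry their own reservation object. */
struct fake_bo {
	struct fake_resv *resv;
	bool moved;		/* protected by *resv, like base.moved */
};

/* Every access to bo->moved happens with the BO reserved. */
static void bo_mark_moved(struct fake_bo *bo)
{
	pthread_mutex_lock(&bo->resv->lock);
	bo->moved = true;
	pthread_mutex_unlock(&bo->resv->lock);
}

static bool bo_test_and_clear_moved(struct fake_bo *bo)
{
	bool was_moved;

	pthread_mutex_lock(&bo->resv->lock);
	was_moved = bo->moved;
	bo->moved = false;
	pthread_mutex_unlock(&bo->resv->lock);
	return was_moved;
}
```

Whether two BOs actually serialize against each other then depends only on whether their resv pointers refer to the same reservation object.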
> I find that most of the VM lists are protected by the shared resv
> lock. Can the status lock be removed as well? It seems everything in
> the VM is protected by that big resv lock.
Nope, that won't work. When amdgpu_vm_bo_invalidate() is called, only
the BO itself is locked, not anything from the VM.
So we need a separate lock to protect this list.
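That asymmetry can be sketched in user space like this (mutexes stand in for the resv lock and the status spinlock; all names are illustrative, not the actual amdgpu code): the invalidate path holds only the BO's own lock, so the VM's per-VM state needs its own lock.

```c
#include <assert.h>
#include <pthread.h>

/* Illustrative only: mutexes stand in for the resv lock and the
 * status spinlock; an int stands in for the vm_status list. */
struct fake_vm {
	pthread_mutex_t status_lock;	/* protects n_moved */
	int n_moved;			/* stands in for the moved list */
};

struct fake_bo {
	pthread_mutex_t resv;		/* this BO's own reservation lock */
	struct fake_vm *vm;
};

/* Called with only bo->resv held -- nothing of the VM is locked,
 * so touching per-VM state requires the VM's own status lock. */
static void fake_bo_invalidate(struct fake_bo *bo)
{
	pthread_mutex_lock(&bo->vm->status_lock);
	bo->vm->n_moved++;
	pthread_mutex_unlock(&bo->vm->status_lock);
}

static void fake_move_bo(struct fake_bo *bo)
{
	pthread_mutex_lock(&bo->resv);	/* only the BO is reserved */
	fake_bo_invalidate(bo);
	pthread_mutex_unlock(&bo->resv);
}
```

Two BOs with separate resv locks can hit this path concurrently against the same VM, which is exactly why the shared resv lock alone cannot protect the list.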
Thanks for the review,
Christian.
On 29.08.2017 04:13, zhoucm1 wrote:
Hi Christian,
I asked you in the last thread how you keep the access to base.moved
safe. I just checked it, and it depends on the shared resv lock.
For the patch itself: Reviewed-by: Chunming Zhou <[email protected]>
Another question has come to mind: I find that most of the VM lists are
protected by the shared resv lock. Can the status lock be removed as
well? It seems everything in the VM is protected by that big resv lock.
Regards,
David Zhou
On 2017-08-29 02:50, Christian König wrote:
From: Christian König <[email protected]>
Instead of using the vm_state use a separate flag to note
that the BO was moved.
v2: reorder patches to avoid temporary lockless access
Signed-off-by: Christian König <[email protected]>
---
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 13 ++++++++++---
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h | 3 +++
2 files changed, 13 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index f621dba..ee53293 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -1787,10 +1787,16 @@ int amdgpu_vm_bo_update(struct amdgpu_device *adev,
else
flags = 0x0;
- spin_lock(&vm->status_lock);
- if (!list_empty(&bo_va->base.vm_status))
+ if (!clear && bo_va->base.moved) {
+ bo_va->base.moved = false;
list_splice_init(&bo_va->valids, &bo_va->invalids);
- spin_unlock(&vm->status_lock);
+
+ } else {
+ spin_lock(&vm->status_lock);
+ if (!list_empty(&bo_va->base.vm_status))
+ list_splice_init(&bo_va->valids, &bo_va->invalids);
+ spin_unlock(&vm->status_lock);
+ }
list_for_each_entry(mapping, &bo_va->invalids, list) {
r = amdgpu_vm_bo_split_mapping(adev, exclusive, pages_addr,
vm,
@@ -2418,6 +2424,7 @@ void amdgpu_vm_bo_invalidate(struct amdgpu_device *adev,
struct amdgpu_vm_bo_base *bo_base;
list_for_each_entry(bo_base, &bo->va, bo_list) {
+ bo_base->moved = true;
spin_lock(&bo_base->vm->status_lock);
if (list_empty(&bo_base->vm_status))
list_add(&bo_base->vm_status,
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
index 9347d28..1b478e6 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
@@ -105,6 +105,9 @@ struct amdgpu_vm_bo_base {
/* protected by spinlock */
struct list_head vm_status;
+
+ /* protected by the BO being reserved */
+ bool moved;
};
struct amdgpu_vm_pt {
_______________________________________________
amd-gfx mailing list
[email protected]
https://lists.freedesktop.org/mailman/listinfo/amd-gfx