Re: [Intel-gfx] [PATCH v2 06/21] drm/i915/gt: Batch TLB invalidations
On 28/07/2022 07:32, Mauro Carvalho Chehab wrote:

On Wed, 27 Jul 2022 13:56:50 +0100 Tvrtko Ursulin wrote:

Because vma_invalidate_tlb() basically stores a TLB seqno, but the actual
invalidation is deferred to when the pages are unset, at
__i915_gem_object_unset_pages().

So, what happens is:

- on VMA sync mode, the need to invalidate the TLB is marked at
  __vma_put_pages(), before VMA unbind;
- on async, this is deferred to happen at ppgtt_unbind_vma(), where
  it marks the need to invalidate TLBs.

In both cases, __i915_gem_object_unset_pages() is called later, when the
driver is ready to unmap the pages.

Sorry, it is still not clear to me why the patch moves the marking of the
need to invalidate (regardless of whether it is a bit like today, or a
seqno like in this patch) from bind to unbind?

What if the seqno was stored in i915_vma_bind, where the bit is set today,
and all the hunks which touch the unbind and evict would disappear from
the patch. What wouldn't work in that case, if anything?

Ah, now I see your point.

I can't see any sense in having a sequence number at VMA bind, as the
unbind order can be different. Whether a full TLB invalidation is needed
or not depends on the unbind order.

Sorry, yes, that was stupid of me. What I was really thinking of was the
approach I initially used for coalescing: keep the set_bit in bind, and
then, once the code enters intel_gt_invalidate_tlbs(), take a "ticket"
and wait on the mutex. Once it gets the mutex, check the ticket against
the GT copy; if two invalidations have passed since it started waiting on
the mutex, it can exit immediately.

That would seem like a minimal improvement to batch things up. But I
guess it would still emit needless invalidations if there is no
contention, just a stream of serialized put-pages calls. The approach
from this patch, in contrast, can skip all but the truly required ones.

Okay, go for it, and thanks for the explanations.

Acked-by: Tvrtko Ursulin

Regards,

Tvrtko

P.S. The last remaining "ugliness" is the second call to invalidation
from evict. It would be nicer if there was a single common place to do it
on VMA unbind, but okay, I do not plan to dig into it, so fine.

The way the current algorithm works is that drm_i915_gem_object can be
created in any order and, at unbind/evict, they receive a seqno.

The seqno is incremented at intel_gt_invalidate_tlb():

	void intel_gt_invalidate_tlb(struct intel_gt *gt, u32 seqno)
	{
		with_intel_gt_pm_if_awake(gt, wakeref) {
			mutex_lock(&gt->tlb.invalidate_lock);
			if (tlb_seqno_passed(gt, seqno))
				goto unlock;

			mmio_invalidate_full(gt);

			write_seqcount_invalidate(&gt->tlb.seqno); /* increments seqno */

So, let's say 3 objects were created, in this order:

	obj1
	obj2
	obj3

They would be unbound/evicted in a different order. At that time, mm.tlb
will be stamped with a seqno, using the number from the last TLB flush,
plus 1.

As different threads can be used to handle TLB flushes, let's imagine two
threads (just for the sake of having an example). In such a case, what we
would have is:

	seqno   Thread 0                                Thread 1

	seqno=2 unbind/evict event
	        obj3.mm.tlb = seqno | 1
	                                                unbind/evict event
	                                                obj1.mm.tlb = seqno | 1
	        __i915_gem_object_unset_pages()
	        called for obj3, TLB flush happened,
	        invalidating both obj1 and obj2.
	        seqno += 2
	seqno=4 unbind/evict event
	        obj2.mm.tlb = seqno | 1
	                                                __i915_gem_object_unset_pages()
	                                                called for obj1, don't flush.
	        ...
	        __i915_gem_object_unset_pages()
	        called for obj2, TLB flush happened
	        seqno += 2
	seqno=6

So, basically, the seqno is used to track when the object data stopped
being updated because of an unbind/evict event. It is later used by
intel_gt_invalidate_tlb(), when called from
__i915_gem_object_unset_pages(), to check whether a previous invalidation
call was enough to invalidate the object, or whether a new call is
needed.

Now, if the seqno were stored at bind, data could still leak, as the
assumption made by intel_gt_invalidate_tlb() that the data stopped being
used at seqno would not be true anymore.

Still, I agree that this logic is complex and should be better
documented. So, if you're now OK with this patch, I'll add the above
explanation inside a kernel-doc comment.

Regards,
Mauro
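[The "ticket" scheme Tvrtko sketches above is easy to model outside the
driver. The sketch below is a userspace C approximation with invented
names (invalidate_coalesced, completed), not driver code; it only
demonstrates the coalescing rule - skip the flush when two invalidations
completed while waiting on the mutex - and why it still flushes
needlessly when there is no contention:]

	#include <pthread.h>
	#include <stdatomic.h>
	#include <stdio.h>

	static pthread_mutex_t invalidate_lock = PTHREAD_MUTEX_INITIALIZER;
	static atomic_uint completed;	/* completed full invalidations */

	static void mmio_invalidate_full(void)
	{
		/* stand-in for the expensive MMIO invalidation sequence */
	}

	/*
	 * Take a ticket before contending on the mutex; once inside, skip
	 * the flush if two invalidations completed while we were waiting,
	 * since at least one of them must have started after our ticket.
	 */
	static void invalidate_coalesced(void)
	{
		unsigned int ticket = atomic_load(&completed);

		pthread_mutex_lock(&invalidate_lock);
		if (atomic_load(&completed) - ticket >= 2) {
			pthread_mutex_unlock(&invalidate_lock);
			return;	/* covered by a flush newer than the ticket */
		}
		mmio_invalidate_full();
		atomic_fetch_add(&completed, 1);
		pthread_mutex_unlock(&invalidate_lock);
	}

	int main(void)
	{
		invalidate_coalesced();	/* no contention: flushes anyway */
		invalidate_coalesced();	/* ditto: the scheme's weakness */
		printf("full invalidations: %u\n", atomic_load(&completed));
		return 0;
	}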
Re: [Intel-gfx] [PATCH v2 06/21] drm/i915/gt: Batch TLB invalidations
On Thu, 28 Jul 2022 08:32:32 +0200
Mauro Carvalho Chehab wrote:

> On Wed, 27 Jul 2022 13:56:50 +0100
> Tvrtko Ursulin wrote:
>
> > > Because vma_invalidate_tlb() basically stores a TLB seqno, but the
> > > actual invalidation is deferred to when the pages are unset, at
> > > __i915_gem_object_unset_pages().
> > >
> > > So, what happens is:
> > >
> > > - on VMA sync mode, the need to invalidate the TLB is marked at
> > >   __vma_put_pages(), before VMA unbind;
> > > - on async, this is deferred to happen at ppgtt_unbind_vma(), where
> > >   it marks the need to invalidate TLBs.
> > >
> > > In both cases, __i915_gem_object_unset_pages() is called later,
> > > when the driver is ready to unmap the pages.
> >
> > Sorry, it is still not clear to me why the patch moves the marking of
> > the need to invalidate (regardless of whether it is a bit like today,
> > or a seqno like in this patch) from bind to unbind?
> >
> > What if the seqno was stored in i915_vma_bind, where the bit is set
> > today, and all the hunks which touch the unbind and evict would
> > disappear from the patch. What wouldn't work in that case, if anything?
>
> Ah, now I see your point.
>
> I can't see any sense in having a sequence number at VMA bind, as the
> unbind order can be different. Whether a full TLB invalidation is
> needed or not depends on the unbind order.
>
> The way the current algorithm works is that drm_i915_gem_object can be
> created in any order and, at unbind/evict, they receive a seqno.
>
> The seqno is incremented at intel_gt_invalidate_tlb():
>
> 	void intel_gt_invalidate_tlb(struct intel_gt *gt, u32 seqno)
> 	{
> 		with_intel_gt_pm_if_awake(gt, wakeref) {
> 			mutex_lock(&gt->tlb.invalidate_lock);
> 			if (tlb_seqno_passed(gt, seqno))
> 				goto unlock;
>
> 			mmio_invalidate_full(gt);
>
> 			write_seqcount_invalidate(&gt->tlb.seqno); /* increments seqno */
>
> So, let's say 3 objects were created, in this order:
>
> 	obj1
> 	obj2
> 	obj3
>
> They would be unbound/evicted in a different order. At that time,
> mm.tlb will be stamped with a seqno, using the number from the last
> TLB flush, plus 1.
>
> As different threads can be used to handle TLB flushes, let's imagine
> two threads (just for the sake of having an example). In such a case,
> what we would have is:
>
> 	seqno   Thread 0                                Thread 1
>
> 	seqno=2 unbind/evict event
> 	        obj3.mm.tlb = seqno | 1
> 	                                                unbind/evict event
> 	                                                obj1.mm.tlb = seqno | 1
> 	        __i915_gem_object_unset_pages()
> 	        called for obj3, TLB flush happened,
> 	        invalidating both obj1 and obj2.
> 	        seqno += 2
> 	seqno=4 unbind/evict event
> 	        obj1.mm.tlb = seqno | 1

Cut-and-paste typo. It should be, instead:

	        obj2.mm.tlb = seqno | 1

> 	                                                __i915_gem_object_unset_pages()
> 	                                                called for obj1, don't flush.
> 	        ...
> 	        __i915_gem_object_unset_pages()
> 	        called for obj2, TLB flush happened
> 	        seqno += 2
> 	seqno=6
>
> So, basically, the seqno is used to track when the object data stopped
> being updated because of an unbind/evict event. It is later used by
> intel_gt_invalidate_tlb(), when called from
> __i915_gem_object_unset_pages(), to check whether a previous
> invalidation call was enough to invalidate the object, or whether a
> new call is needed.
>
> Now, if the seqno were stored at bind, data could still leak, as the
> assumption made by intel_gt_invalidate_tlb() that the data stopped
> being used at seqno would not be true anymore.
>
> Still, I agree that this logic is complex and should be better
> documented. So, if you're now OK with this patch, I'll add the above
> explanation inside a kernel-doc comment.
I'm enclosing the kernel-doc patch (to be applied after moving the code
into its own files: intel_tlb.c/intel_tlb.h):

[PATCH] drm/i915/gt: document TLB cache invalidation functions

Add a description for the TLB cache invalidation algorithm and for the
related kAPI functions.

Signed-off-by: Mauro Carvalho Chehab

diff --git a/drivers/gpu/drm/i915/gt/intel_tlb.c b/drivers/gpu/drm/i915/gt/intel_tlb.c
index af8cae979489..8eda0743da74 100644
--- a/drivers/gpu/drm/i915/gt/intel_tlb.c
+++ b/drivers/gpu/drm/i915/gt/intel_tlb.c
@@ -145,6 +145,18 @@ static void mmio_invalidate_full(struct intel_gt *gt)
 	intel_uncore_forcewake_put_delayed(uncore, FORCEWAKE_ALL);
 }
 
+/**
+ * intel_gt_invalidate_tlb_full - do full TLB cache invalidation
+ * @gt: GT structure
+ *
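[To make the "data can still leak" argument above concrete, here is a
small self-contained C model. The counter and helpers are toys, not
driver code, and the exact comparison inside the real tlb_seqno_passed()
is my assumption. The point: a full flush that runs while an object is
still bound satisfies a bind-time stamp, so the release-time check would
wrongly skip the flush the object actually needs:]

	#include <assert.h>
	#include <stdint.h>

	static uint32_t gt_seqno = 2;	/* model of gt->tlb.seqno */

	static uint32_t stamp(void)    { return gt_seqno | 1; }
	static int passed(uint32_t s)  { return (int32_t)(gt_seqno - s) > 0; }
	static void full_flush(void)   { gt_seqno += 2; }

	int main(void)
	{
		/* WRONG: stamp taken at bind time, mapping still live */
		uint32_t bind_stamp = stamp();		/* = 3 */

		full_flush();	/* unrelated flush: seqno = 4; the object
				 * is still bound, so its TLB entries can
				 * be re-populated after this point */

		/* at unbind + unset_pages, the check says "covered" */
		assert(passed(bind_stamp));	/* stale entries survive */

		/* RIGHT: a stamp taken at unbind is only satisfied by a
		 * flush issued after the PTEs were cleared */
		uint32_t unbind_stamp = stamp();	/* = 5 */
		assert(!passed(unbind_stamp));		/* flush is forced */
		return 0;
	}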
Re: [Intel-gfx] [PATCH v2 06/21] drm/i915/gt: Batch TLB invalidations
On Wed, 27 Jul 2022 13:56:50 +0100
Tvrtko Ursulin wrote:

> > Because vma_invalidate_tlb() basically stores a TLB seqno, but the
> > actual invalidation is deferred to when the pages are unset, at
> > __i915_gem_object_unset_pages().
> >
> > So, what happens is:
> >
> > - on VMA sync mode, the need to invalidate the TLB is marked at
> >   __vma_put_pages(), before VMA unbind;
> > - on async, this is deferred to happen at ppgtt_unbind_vma(), where
> >   it marks the need to invalidate TLBs.
> >
> > In both cases, __i915_gem_object_unset_pages() is called later,
> > when the driver is ready to unmap the pages.
>
> Sorry, it is still not clear to me why the patch moves the marking of
> the need to invalidate (regardless of whether it is a bit like today,
> or a seqno like in this patch) from bind to unbind?
>
> What if the seqno was stored in i915_vma_bind, where the bit is set
> today, and all the hunks which touch the unbind and evict would
> disappear from the patch. What wouldn't work in that case, if anything?

Ah, now I see your point.

I can't see any sense in having a sequence number at VMA bind, as the
unbind order can be different. Whether a full TLB invalidation is needed
or not depends on the unbind order.

The way the current algorithm works is that drm_i915_gem_object can be
created in any order and, at unbind/evict, they receive a seqno.

The seqno is incremented at intel_gt_invalidate_tlb():

	void intel_gt_invalidate_tlb(struct intel_gt *gt, u32 seqno)
	{
		with_intel_gt_pm_if_awake(gt, wakeref) {
			mutex_lock(&gt->tlb.invalidate_lock);
			if (tlb_seqno_passed(gt, seqno))
				goto unlock;

			mmio_invalidate_full(gt);

			write_seqcount_invalidate(&gt->tlb.seqno); /* increments seqno */

So, let's say 3 objects were created, in this order:

	obj1
	obj2
	obj3

They would be unbound/evicted in a different order. At that time, mm.tlb
will be stamped with a seqno, using the number from the last TLB flush,
plus 1.

As different threads can be used to handle TLB flushes, let's imagine two
threads (just for the sake of having an example). In such a case, what we
would have is:

	seqno   Thread 0                                Thread 1

	seqno=2 unbind/evict event
	        obj3.mm.tlb = seqno | 1
	                                                unbind/evict event
	                                                obj1.mm.tlb = seqno | 1
	        __i915_gem_object_unset_pages()
	        called for obj3, TLB flush happened,
	        invalidating both obj1 and obj2.
	        seqno += 2
	seqno=4 unbind/evict event
	        obj1.mm.tlb = seqno | 1
	                                                __i915_gem_object_unset_pages()
	                                                called for obj1, don't flush.
	        ...
	        __i915_gem_object_unset_pages()
	        called for obj2, TLB flush happened
	        seqno += 2
	seqno=6

So, basically, the seqno is used to track when the object data stopped
being updated because of an unbind/evict event. It is later used by
intel_gt_invalidate_tlb(), when called from
__i915_gem_object_unset_pages(), to check whether a previous invalidation
call was enough to invalidate the object, or whether a new call is
needed.

Now, if the seqno were stored at bind, data could still leak, as the
assumption made by intel_gt_invalidate_tlb() that the data stopped being
used at seqno would not be true anymore.

Still, I agree that this logic is complex and should be better
documented. So, if you're now OK with this patch, I'll add the above
explanation inside a kernel-doc comment.

Regards,
Mauro
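[The obj1/obj2/obj3 walkthrough above can be replayed in standalone C.
This is a toy model based only on the snippets quoted in this thread:
stamps are seqno | 1 (odd), a full flush bumps the even counter by 2
(which is what write_seqcount_invalidate() does to gt->tlb.seqno), and
the wrap-safe signed comparison stands in for the real tlb_seqno_passed();
the exact form of that check is an assumption. Note it uses the corrected
obj2 stamping from the follow-up message:]

	#include <assert.h>
	#include <stdint.h>
	#include <stdio.h>

	static uint32_t gt_seqno = 2;		/* "seqno=2" starting point */

	static uint32_t stamp_unbind(void)	/* obj->mm.tlb = seqno | 1 */
	{
		return gt_seqno | 1;
	}

	static int tlb_seqno_passed(uint32_t stamp)
	{
		/* has a full flush completed since the stamp was taken? */
		return (int32_t)(gt_seqno - stamp) > 0;
	}

	static void unset_pages(uint32_t *stamp)  /* flush only if needed */
	{
		if (!tlb_seqno_passed(*stamp))
			gt_seqno += 2;		/* mmio_invalidate_full() */
		*stamp = 0;
	}

	int main(void)
	{
		uint32_t obj3 = stamp_unbind();	/* = 3 */
		uint32_t obj1 = stamp_unbind();	/* = 3: same pending flush */

		unset_pages(&obj3);		/* flushes: seqno 2 -> 4 */
		assert(gt_seqno == 4);

		uint32_t obj2 = stamp_unbind();	/* = 5 */
		unset_pages(&obj1);		/* 3 already passed: skip */
		assert(gt_seqno == 4);

		unset_pages(&obj2);		/* flushes: seqno 4 -> 6 */
		assert(gt_seqno == 6);
		printf("final seqno=%u\n", gt_seqno);
		return 0;
	}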
Re: [Intel-gfx] [PATCH v2 06/21] drm/i915/gt: Batch TLB invalidations
On 27/07/2022 12:48, Mauro Carvalho Chehab wrote:

On Wed, 20 Jul 2022 11:49:59 +0100 Tvrtko Ursulin wrote:

On 20/07/2022 08:13, Mauro Carvalho Chehab wrote:

On Mon, 18 Jul 2022 14:52:05 +0100 Tvrtko Ursulin wrote:

On 14/07/2022 13:06, Mauro Carvalho Chehab wrote:

From: Chris Wilson

Invalidate TLB in patch, in order to reduce performance regressions.

"in batches"?

Yeah. Will fix it.

+void vma_invalidate_tlb(struct i915_address_space *vm, u32 tlb)
+{
+	/*
+	 * Before we release the pages that were bound by this vma, we
+	 * must invalidate all the TLBs that may still have a reference
+	 * back to our physical address. It only needs to be done once,
+	 * so after updating the PTE to point away from the pages, record
+	 * the most recent TLB invalidation seqno, and if we have not yet
+	 * flushed the TLBs upon release, perform a full invalidation.
+	 */
+	WRITE_ONCE(tlb, intel_gt_next_invalidate_tlb_full(vm->gt));

Shouldn't tlb be a pointer for this to make sense?

Oh, my mistake! Will fix at the next version.

diff --git a/drivers/gpu/drm/i915/gt/intel_ppgtt.c b/drivers/gpu/drm/i915/gt/intel_ppgtt.c
index d8b94d638559..2da6c82a8bd2 100644
--- a/drivers/gpu/drm/i915/gt/intel_ppgtt.c
+++ b/drivers/gpu/drm/i915/gt/intel_ppgtt.c
@@ -206,8 +206,12 @@ void ppgtt_bind_vma(struct i915_address_space *vm,
 void ppgtt_unbind_vma(struct i915_address_space *vm,
 		      struct i915_vma_resource *vma_res)
 {
-	if (vma_res->allocated)
-		vm->clear_range(vm, vma_res->start, vma_res->vma_size);
+	if (!vma_res->allocated)
+		return;
+
+	vm->clear_range(vm, vma_res->start, vma_res->vma_size);
+	if (vma_res->tlb)
+		vma_invalidate_tlb(vm, *vma_res->tlb);

The patch is about more than batching? If there is a security hole in
this area (unbind) with the current code?

No, I don't think there's a security hole. The rationale for this is not
due to it.

In this case the obvious question is why these changes are in the patch
which declares itself to be about batching invalidations?

Because vma_invalidate_tlb() basically stores a TLB seqno, but the actual
invalidation is deferred to when the pages are unset, at
__i915_gem_object_unset_pages().

So, what happens is:

- on VMA sync mode, the need to invalidate the TLB is marked at
  __vma_put_pages(), before VMA unbind;
- on async, this is deferred to happen at ppgtt_unbind_vma(), where
  it marks the need to invalidate TLBs.

In both cases, __i915_gem_object_unset_pages() is called later, when the
driver is ready to unmap the pages.

Sorry, it is still not clear to me why the patch moves the marking of the
need to invalidate (regardless of whether it is a bit like today, or a
seqno like in this patch) from bind to unbind?

What if the seqno was stored in i915_vma_bind, where the bit is set
today, and all the hunks which touch the unbind and evict would disappear
from the patch. What wouldn't work in that case, if anything?

Regards,

Tvrtko

I am explaining why it looks to me that the patch is doing two things:
implementing batching _and_ adding invalidation points at VMA unbind
sites, while so far we had it at backing store release only. Maybe I am
wrong and perhaps I am too slow to pick up on the explanation here.

So if the patch is doing two things please split it up.

I am further confused by the invalidation call site in evict and in
unbind - why can't there be one logical site, since the logical sequence
is evict -> unbind?

The invalidation happens in only one place:
__i915_gem_object_unset_pages(). Despite its name, vma_invalidate_tlb()
just marks the need of doing TLB invalidation.

Regards,
Mauro
Re: [Intel-gfx] [PATCH v2 06/21] drm/i915/gt: Batch TLB invalidations
On Wed, 20 Jul 2022 11:49:59 +0100
Tvrtko Ursulin wrote:

> On 20/07/2022 08:13, Mauro Carvalho Chehab wrote:
> > On Mon, 18 Jul 2022 14:52:05 +0100
> > Tvrtko Ursulin wrote:
> >
> >> On 14/07/2022 13:06, Mauro Carvalho Chehab wrote:
> >>> From: Chris Wilson
> >>>
> >>> Invalidate TLB in patch, in order to reduce performance regressions.
> >>
> >> "in batches"?
> >
> > Yeah. Will fix it.

> > +void vma_invalidate_tlb(struct i915_address_space *vm, u32 tlb)
> > +{
> > +	/*
> > +	 * Before we release the pages that were bound by this vma, we
> > +	 * must invalidate all the TLBs that may still have a reference
> > +	 * back to our physical address. It only needs to be done once,
> > +	 * so after updating the PTE to point away from the pages, record
> > +	 * the most recent TLB invalidation seqno, and if we have not yet
> > +	 * flushed the TLBs upon release, perform a full invalidation.
> > +	 */
> > +	WRITE_ONCE(tlb, intel_gt_next_invalidate_tlb_full(vm->gt));
>
> Shouldn't tlb be a pointer for this to make sense?

Oh, my mistake! Will fix at the next version.

> >>> diff --git a/drivers/gpu/drm/i915/gt/intel_ppgtt.c b/drivers/gpu/drm/i915/gt/intel_ppgtt.c
> >>> index d8b94d638559..2da6c82a8bd2 100644
> >>> --- a/drivers/gpu/drm/i915/gt/intel_ppgtt.c
> >>> +++ b/drivers/gpu/drm/i915/gt/intel_ppgtt.c
> >>> @@ -206,8 +206,12 @@ void ppgtt_bind_vma(struct i915_address_space *vm,
> >>>  void ppgtt_unbind_vma(struct i915_address_space *vm,
> >>>  		      struct i915_vma_resource *vma_res)
> >>>  {
> >>> -	if (vma_res->allocated)
> >>> -		vm->clear_range(vm, vma_res->start, vma_res->vma_size);
> >>> +	if (!vma_res->allocated)
> >>> +		return;
> >>> +
> >>> +	vm->clear_range(vm, vma_res->start, vma_res->vma_size);
> >>> +	if (vma_res->tlb)
> >>> +		vma_invalidate_tlb(vm, *vma_res->tlb);
> >>
> >> The patch is about more than batching? If there is a security hole in
> >> this area (unbind) with the current code?
> >
> > No, I don't think there's a security hole. The rationale for this is
> > not due to it.
>
> In this case the obvious question is why these changes are in the patch
> which declares itself to be about batching invalidations?

Because vma_invalidate_tlb() basically stores a TLB seqno, but the actual
invalidation is deferred to when the pages are unset, at
__i915_gem_object_unset_pages().

So, what happens is:

- on VMA sync mode, the need to invalidate the TLB is marked at
  __vma_put_pages(), before VMA unbind;
- on async, this is deferred to happen at ppgtt_unbind_vma(), where
  it marks the need to invalidate TLBs.

In both cases, __i915_gem_object_unset_pages() is called later, when the
driver is ready to unmap the pages.

> I am explaining why it looks to me that the patch is doing two things:
> implementing batching _and_ adding invalidation points at VMA unbind
> sites, while so far we had it at backing store release only. Maybe I am
> wrong and perhaps I am too slow to pick up on the explanation here.
>
> So if the patch is doing two things please split it up.
>
> I am further confused by the invalidation call site in evict and in
> unbind - why can't there be one logical site, since the logical sequence
> is evict -> unbind?

The invalidation happens in only one place:
__i915_gem_object_unset_pages(). Despite its name, vma_invalidate_tlb()
just marks the need of doing TLB invalidation.

Regards,
Mauro
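[Given the "will fix at the next version" above, the corrected helper
presumably takes the stamp slot by pointer, so that WRITE_ONCE() writes
through to obj->mm.tlb rather than to a by-value copy. A sketch of that
shape (not the merged code; the NULL check is my addition):]

	void vma_invalidate_tlb(struct i915_address_space *vm, u32 *tlb)
	{
		if (!tlb)
			return;

		/*
		 * Record the seqno of the next full TLB invalidation in
		 * the caller-owned slot (e.g. &obj->mm.tlb). Taking a
		 * pointer, rather than u32, is what makes the WRITE_ONCE()
		 * land in the object instead of in a local copy.
		 */
		WRITE_ONCE(*tlb, intel_gt_next_invalidate_tlb_full(vm->gt));
	}

[Callers such as ppgtt_unbind_vma() would then presumably pass
vma_res->tlb itself instead of dereferencing it.]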
Re: [Intel-gfx] [PATCH v2 06/21] drm/i915/gt: Batch TLB invalidations
On 14/07/2022 13:06, Mauro Carvalho Chehab wrote:

From: Chris Wilson

Invalidate TLB in patch, in order to reduce performance regressions.

Currently, every caller performs a full barrier around a TLB
invalidation, ignoring all other invalidations that may have already
removed their PTEs from the cache. As this is a synchronous operation
and can be quite slow, we cause multiple threads to contend on the TLB
invalidate mutex blocking userspace.

We only need to invalidate the TLB once after replacing our PTE to
ensure that there is no possible continued access to the physical
address before releasing our pages. By tracking a seqno for each full
TLB invalidate we can quickly determine if one has been performed since
rewriting the PTE, and only if necessary trigger one for ourselves.

That helps to reduce the performance regression introduced by TLB
invalidate logic.

[mchehab: rebased to not require moving the code to a separate file]

Cc: sta...@vger.kernel.org
Fixes: 7938d61591d3 ("drm/i915: Flush TLBs before releasing backing store")
Suggested-by: Tvrtko Ursulin
Signed-off-by: Chris Wilson
Cc: Fei Yang
Signed-off-by: Mauro Carvalho Chehab
---

To avoid mailbombing a large number of people, only mailing lists were
Cc'd on the cover. See [PATCH v2 00/21] at:
https://lore.kernel.org/all/cover.1657800199.git.mche...@kernel.org/

 .../gpu/drm/i915/gem/i915_gem_object_types.h |  3 +-
 drivers/gpu/drm/i915/gem/i915_gem_pages.c    | 21 +---
 drivers/gpu/drm/i915/gt/intel_gt.c           | 53 ++-
 drivers/gpu/drm/i915/gt/intel_gt.h           | 12 -
 drivers/gpu/drm/i915/gt/intel_gt_types.h     | 18 ++-
 drivers/gpu/drm/i915/gt/intel_ppgtt.c        |  8 ++-
 drivers/gpu/drm/i915/i915_vma.c              | 34 +---
 drivers/gpu/drm/i915/i915_vma.h              |  1 +
 drivers/gpu/drm/i915/i915_vma_resource.c     |  5 +-
 drivers/gpu/drm/i915/i915_vma_resource.h     |  6 ++-
 10 files changed, 125 insertions(+), 36 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
index 5cf36a130061..9f6b14ec189a 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
@@ -335,7 +335,6 @@ struct drm_i915_gem_object {
 #define I915_BO_READONLY	BIT(7)
 #define I915_TILING_QUIRK_BIT	8 /* unknown swizzling; do not release! */
 #define I915_BO_PROTECTED	BIT(9)
-#define I915_BO_WAS_BOUND_BIT	10
 	/**
 	 * @mem_flags - Mutable placement-related flags
 	 *
@@ -616,6 +615,8 @@ struct drm_i915_gem_object {
 		 * pages were last acquired.
 		 */
 		bool dirty:1;
+
+		u32 tlb;
 	} mm;
 
 	struct {
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
index 6835279943df..8357dbdcab5c 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
@@ -191,6 +191,18 @@ static void unmap_object(struct drm_i915_gem_object *obj, void *ptr)
 		vunmap(ptr);
 }
 
+static void flush_tlb_invalidate(struct drm_i915_gem_object *obj)
+{
+	struct drm_i915_private *i915 = to_i915(obj->base.dev);
+	struct intel_gt *gt = to_gt(i915);
+
+	if (!obj->mm.tlb)
+		return;
+
+	intel_gt_invalidate_tlb(gt, obj->mm.tlb);
+	obj->mm.tlb = 0;
+}
+
 struct sg_table *
 __i915_gem_object_unset_pages(struct drm_i915_gem_object *obj)
 {
@@ -216,14 +228,7 @@ __i915_gem_object_unset_pages(struct drm_i915_gem_object *obj)
 	__i915_gem_object_reset_page_iter(obj);
 	obj->mm.page_sizes.phys = obj->mm.page_sizes.sg = 0;
 
-	if (test_and_clear_bit(I915_BO_WAS_BOUND_BIT, &obj->flags)) {
-		struct drm_i915_private *i915 = to_i915(obj->base.dev);
-		struct intel_gt *gt = to_gt(i915);
-		intel_wakeref_t wakeref;
-
-		with_intel_gt_pm_if_awake(gt, wakeref)
-			intel_gt_invalidate_tlbs(gt);
-	}
+	flush_tlb_invalidate(obj);
 
 	return pages;
 }
diff --git a/drivers/gpu/drm/i915/gt/intel_gt.c b/drivers/gpu/drm/i915/gt/intel_gt.c
index 5c55a90672f4..f435e06125aa 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt.c
+++ b/drivers/gpu/drm/i915/gt/intel_gt.c
@@ -38,8 +38,6 @@ static void __intel_gt_init_early(struct intel_gt *gt)
 {
 	spin_lock_init(&gt->irq_lock);
 
-	mutex_init(&gt->tlb_invalidate_lock);
-
 	INIT_LIST_HEAD(&gt->closed_vma);
 	spin_lock_init(&gt->closed_lock);
 
@@ -50,6 +48,8 @@ static void __intel_gt_init_early(struct intel_gt *gt)
 	intel_gt_init_reset(gt);
 	intel_gt_init_requests(gt);
 	intel_gt_init_timelines(gt);
+	mutex_init(&gt->tlb.invalidate_lock);
+	seqcount_mutex_init(&gt->tlb.seqno, &gt->tlb.invalidate_lock);
 	intel_gt_pm_init_early(gt);
 
 	intel_uc_init_early(&gt->uc);
@@ -770,6 +770,7 @
Re: [Intel-gfx] [PATCH v2 06/21] drm/i915/gt: Batch TLB invalidations
On 20/07/2022 08:13, Mauro Carvalho Chehab wrote:

On Mon, 18 Jul 2022 14:52:05 +0100 Tvrtko Ursulin wrote:

On 14/07/2022 13:06, Mauro Carvalho Chehab wrote:

From: Chris Wilson

Invalidate TLB in patch, in order to reduce performance regressions.

"in batches"?

Yeah. Will fix it.

diff --git a/drivers/gpu/drm/i915/gt/intel_ppgtt.c b/drivers/gpu/drm/i915/gt/intel_ppgtt.c
index d8b94d638559..2da6c82a8bd2 100644
--- a/drivers/gpu/drm/i915/gt/intel_ppgtt.c
+++ b/drivers/gpu/drm/i915/gt/intel_ppgtt.c
@@ -206,8 +206,12 @@ void ppgtt_bind_vma(struct i915_address_space *vm,
 void ppgtt_unbind_vma(struct i915_address_space *vm,
 		      struct i915_vma_resource *vma_res)
 {
-	if (vma_res->allocated)
-		vm->clear_range(vm, vma_res->start, vma_res->vma_size);
+	if (!vma_res->allocated)
+		return;
+
+	vm->clear_range(vm, vma_res->start, vma_res->vma_size);
+	if (vma_res->tlb)
+		vma_invalidate_tlb(vm, *vma_res->tlb);

The patch is about more than batching? If there is a security hole in
this area (unbind) with the current code?

No, I don't think there's a security hole. The rationale for this is not
due to it.

In this case the obvious question is why these changes are in the patch
which declares itself to be about batching invalidations?

Since commit 2f6b90da9192 ("drm/i915: Use vma resources for async
unbinding"), VMA unbind can happen either synchronously or
asynchronously. So, the logic needs to do the TLB invalidation in two
places.

After this patch, the code at __i915_vma_evict is:

	struct dma_fence *__i915_vma_evict(struct i915_vma *vma, bool async)
	{
	...
		if (async)
			unbind_fence = i915_vma_resource_unbind(vma_res,
								&vma->obj->mm.tlb);
		else
			unbind_fence = i915_vma_resource_unbind(vma_res, NULL);

		vma->resource = NULL;

		atomic_and(~(I915_VMA_BIND_MASK | I915_VMA_ERROR | I915_VMA_GGTT_WRITE),
			   &vma->flags);

		i915_vma_detach(vma);

		if (!async) {
			if (unbind_fence) {
				dma_fence_wait(unbind_fence, false);
				dma_fence_put(unbind_fence);
				unbind_fence = NULL;
			}
			vma_invalidate_tlb(vma->vm, vma->obj->mm.tlb);
		}
	...

So, basically, if !async, __i915_vma_evict() will do the TLB cache
invalidation. However, when async is used, the actual page release will
happen later, at this function:

	void ppgtt_unbind_vma(struct i915_address_space *vm,
			      struct i915_vma_resource *vma_res)
	{
		if (!vma_res->allocated)
			return;

		vm->clear_range(vm, vma_res->start, vma_res->vma_size);
		if (vma_res->tlb)
			vma_invalidate_tlb(vm, *vma_res->tlb);
	}

.. frankly I don't follow, since I don't see any page release happening
in here. Just PTE clearing.

I am explaining why it looks to me that the patch is doing two things:
implementing batching _and_ adding invalidation points at VMA unbind
sites, while so far we had it at backing store release only. Maybe I am
wrong and perhaps I am too slow to pick up on the explanation here.

So if the patch is doing two things please split it up.

I am further confused by the invalidation call site in evict and in
unbind - why can't there be one logical site, since the logical sequence
is evict -> unbind?

Regards,

Tvrtko
Re: [Intel-gfx] [PATCH v2 06/21] drm/i915/gt: Batch TLB invalidations
On Mon, 18 Jul 2022 14:52:05 +0100
Tvrtko Ursulin wrote:

> On 14/07/2022 13:06, Mauro Carvalho Chehab wrote:
> > From: Chris Wilson
> >
> > Invalidate TLB in patch, in order to reduce performance regressions.
>
> "in batches"?

Yeah. Will fix it.

> > diff --git a/drivers/gpu/drm/i915/gt/intel_ppgtt.c b/drivers/gpu/drm/i915/gt/intel_ppgtt.c
> > index d8b94d638559..2da6c82a8bd2 100644
> > --- a/drivers/gpu/drm/i915/gt/intel_ppgtt.c
> > +++ b/drivers/gpu/drm/i915/gt/intel_ppgtt.c
> > @@ -206,8 +206,12 @@ void ppgtt_bind_vma(struct i915_address_space *vm,
> >  void ppgtt_unbind_vma(struct i915_address_space *vm,
> >  		      struct i915_vma_resource *vma_res)
> >  {
> > -	if (vma_res->allocated)
> > -		vm->clear_range(vm, vma_res->start, vma_res->vma_size);
> > +	if (!vma_res->allocated)
> > +		return;
> > +
> > +	vm->clear_range(vm, vma_res->start, vma_res->vma_size);
> > +	if (vma_res->tlb)
> > +		vma_invalidate_tlb(vm, *vma_res->tlb);
>
> The patch is about more than batching? If there is a security hole in
> this area (unbind) with the current code?

No, I don't think there's a security hole. The rationale for this is not
due to it.

Since commit 2f6b90da9192 ("drm/i915: Use vma resources for async
unbinding"), VMA unbind can happen either synchronously or
asynchronously. So, the logic needs to do the TLB invalidation in two
places.

After this patch, the code at __i915_vma_evict is:

	struct dma_fence *__i915_vma_evict(struct i915_vma *vma, bool async)
	{
	...
		if (async)
			unbind_fence = i915_vma_resource_unbind(vma_res,
								&vma->obj->mm.tlb);
		else
			unbind_fence = i915_vma_resource_unbind(vma_res, NULL);

		vma->resource = NULL;

		atomic_and(~(I915_VMA_BIND_MASK | I915_VMA_ERROR | I915_VMA_GGTT_WRITE),
			   &vma->flags);

		i915_vma_detach(vma);

		if (!async) {
			if (unbind_fence) {
				dma_fence_wait(unbind_fence, false);
				dma_fence_put(unbind_fence);
				unbind_fence = NULL;
			}
			vma_invalidate_tlb(vma->vm, vma->obj->mm.tlb);
		}
	...

So, basically, if !async, __i915_vma_evict() will do the TLB cache
invalidation. However, when async is used, the actual page release will
happen later, at this function:

	void ppgtt_unbind_vma(struct i915_address_space *vm,
			      struct i915_vma_resource *vma_res)
	{
		if (!vma_res->allocated)
			return;

		vm->clear_range(vm, vma_res->start, vma_res->vma_size);
		if (vma_res->tlb)
			vma_invalidate_tlb(vm, *vma_res->tlb);
	}

Regards,
Mauro
Re: [Intel-gfx] [PATCH v2 06/21] drm/i915/gt: Batch TLB invalidations
On 14/07/2022 13:06, Mauro Carvalho Chehab wrote:

From: Chris Wilson

Invalidate TLB in patch, in order to reduce performance regressions.

"in batches"?

Currently, every caller performs a full barrier around a TLB
invalidation, ignoring all other invalidations that may have already
removed their PTEs from the cache. As this is a synchronous operation
and can be quite slow, we cause multiple threads to contend on the TLB
invalidate mutex blocking userspace.

We only need to invalidate the TLB once after replacing our PTE to
ensure that there is no possible continued access to the physical
address before releasing our pages. By tracking a seqno for each full
TLB invalidate we can quickly determine if one has been performed since
rewriting the PTE, and only if necessary trigger one for ourselves.

That helps to reduce the performance regression introduced by TLB
invalidate logic.

[mchehab: rebased to not require moving the code to a separate file]

Cc: sta...@vger.kernel.org
Fixes: 7938d61591d3 ("drm/i915: Flush TLBs before releasing backing store")
Suggested-by: Tvrtko Ursulin
Signed-off-by: Chris Wilson
Cc: Fei Yang
Signed-off-by: Mauro Carvalho Chehab
---

To avoid mailbombing a large number of people, only mailing lists were
Cc'd on the cover. See [PATCH v2 00/21] at:
https://lore.kernel.org/all/cover.1657800199.git.mche...@kernel.org/

 .../gpu/drm/i915/gem/i915_gem_object_types.h |  3 +-
 drivers/gpu/drm/i915/gem/i915_gem_pages.c    | 21 +---
 drivers/gpu/drm/i915/gt/intel_gt.c           | 53 ++-
 drivers/gpu/drm/i915/gt/intel_gt.h           | 12 -
 drivers/gpu/drm/i915/gt/intel_gt_types.h     | 18 ++-
 drivers/gpu/drm/i915/gt/intel_ppgtt.c        |  8 ++-
 drivers/gpu/drm/i915/i915_vma.c              | 34 +---
 drivers/gpu/drm/i915/i915_vma.h              |  1 +
 drivers/gpu/drm/i915/i915_vma_resource.c     |  5 +-
 drivers/gpu/drm/i915/i915_vma_resource.h     |  6 ++-
 10 files changed, 125 insertions(+), 36 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
index 5cf36a130061..9f6b14ec189a 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
@@ -335,7 +335,6 @@ struct drm_i915_gem_object {
 #define I915_BO_READONLY	BIT(7)
 #define I915_TILING_QUIRK_BIT	8 /* unknown swizzling; do not release! */
 #define I915_BO_PROTECTED	BIT(9)
-#define I915_BO_WAS_BOUND_BIT	10
 	/**
 	 * @mem_flags - Mutable placement-related flags
 	 *
@@ -616,6 +615,8 @@ struct drm_i915_gem_object {
 		 * pages were last acquired.
 		 */
 		bool dirty:1;
+
+		u32 tlb;
 	} mm;
 
 	struct {
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
index 6835279943df..8357dbdcab5c 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
@@ -191,6 +191,18 @@ static void unmap_object(struct drm_i915_gem_object *obj, void *ptr)
 		vunmap(ptr);
 }
 
+static void flush_tlb_invalidate(struct drm_i915_gem_object *obj)
+{
+	struct drm_i915_private *i915 = to_i915(obj->base.dev);
+	struct intel_gt *gt = to_gt(i915);
+
+	if (!obj->mm.tlb)
+		return;
+
+	intel_gt_invalidate_tlb(gt, obj->mm.tlb);
+	obj->mm.tlb = 0;
+}
+
 struct sg_table *
 __i915_gem_object_unset_pages(struct drm_i915_gem_object *obj)
 {
@@ -216,14 +228,7 @@ __i915_gem_object_unset_pages(struct drm_i915_gem_object *obj)
 	__i915_gem_object_reset_page_iter(obj);
 	obj->mm.page_sizes.phys = obj->mm.page_sizes.sg = 0;
 
-	if (test_and_clear_bit(I915_BO_WAS_BOUND_BIT, &obj->flags)) {
-		struct drm_i915_private *i915 = to_i915(obj->base.dev);
-		struct intel_gt *gt = to_gt(i915);
-		intel_wakeref_t wakeref;
-
-		with_intel_gt_pm_if_awake(gt, wakeref)
-			intel_gt_invalidate_tlbs(gt);
-	}
+	flush_tlb_invalidate(obj);
 
 	return pages;
 }
diff --git a/drivers/gpu/drm/i915/gt/intel_gt.c b/drivers/gpu/drm/i915/gt/intel_gt.c
index 5c55a90672f4..f435e06125aa 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt.c
+++ b/drivers/gpu/drm/i915/gt/intel_gt.c
@@ -38,8 +38,6 @@ static void __intel_gt_init_early(struct intel_gt *gt)
 {
 	spin_lock_init(&gt->irq_lock);
 
-	mutex_init(&gt->tlb_invalidate_lock);
-
 	INIT_LIST_HEAD(&gt->closed_vma);
 	spin_lock_init(&gt->closed_lock);
 
@@ -50,6 +48,8 @@ static void __intel_gt_init_early(struct intel_gt *gt)
 	intel_gt_init_reset(gt);
 	intel_gt_init_requests(gt);
 	intel_gt_init_timelines(gt);
+	mutex_init(&gt->tlb.invalidate_lock);
+	seqcount_mutex_init(&gt->tlb.seqno, &gt->tlb.invalidate_lock);
 	intel_gt_pm_init_early(gt);
 
 	intel_uc_init_early(&gt->uc);
@@