On 24/02/2023 10:24, Pekka Paalanen wrote:
On Fri, 24 Feb 2023 09:41:46 +
Tvrtko Ursulin wrote:
On 24/02/2023 09:26, Pekka Paalanen wrote:
On Thu, 23 Feb 2023 10:51:48 -0800
Rob Clark wrote:
On Thu, Feb 23, 2023 at 1:38 AM Pekka Paalanen wrote:
On Wed, 22 Feb 2023 07:37:26
On 18/02/2023 21:15, Rob Clark wrote:
From: Rob Clark
Add a new flag to let userspace provide a deadline as a hint for syncobj
and timeline waits. This gives a hint to the driver signaling the
backing fences about how soon userspace needs it to complete work, so it
can adjust GPU frequency
On 24/02/2023 09:26, Pekka Paalanen wrote:
On Thu, 23 Feb 2023 10:51:48 -0800
Rob Clark wrote:
On Thu, Feb 23, 2023 at 1:38 AM Pekka Paalanen wrote:
On Wed, 22 Feb 2023 07:37:26 -0800
Rob Clark wrote:
On Wed, Feb 22, 2023 at 1:49 AM Pekka Paalanen wrote:
...
On another matter,
On 23/02/2023 18:41, Badal Nilawar wrote:
Apply Wa_14017073508 for MTL SoC die A step instead of graphics step.
There is no direct interface to get the SoC die stepping, so use the
revid, as revid 0 aligns with SoC die A step.
Bspec: 55420
Fixes: 8f70f1ec587d ("drm/i915/mtl: Add Wa_14017073508
On 22/02/2023 17:16, Rob Clark wrote:
On Wed, Feb 22, 2023 at 9:05 AM Tvrtko Ursulin
wrote:
On 22/02/2023 15:28, Christian König wrote:
On 22.02.2023 at 11:23, Tvrtko Ursulin wrote:
On 18/02/2023 21:15, Rob Clark wrote:
From: Rob Clark
Add a way to hint to the fence signaler
On 22/02/2023 15:28, Christian König wrote:
On 22.02.2023 at 11:23, Tvrtko Ursulin wrote:
On 18/02/2023 21:15, Rob Clark wrote:
From: Rob Clark
Add a way to hint to the fence signaler of an upcoming deadline, such as
vblank, which the fence waiter would prefer not to miss. This is to aid
On 18/02/2023 21:15, Rob Clark wrote:
From: Rob Clark
Propagate the deadline to all the fences in the chain.
Signed-off-by: Rob Clark
Reviewed-by: Christian König for this one.
---
drivers/dma-buf/dma-fence-chain.c | 13 +
1 file changed, 13 insertions(+)
diff --git
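The diff itself is truncated above, but the idea of "propagate the deadline to all the fences in the chain" can be illustrated with a toy model (the types and names below are invented for illustration and are not the kernel's dma_fence_chain API): a chain cannot signal until every link has, so the hint is forwarded to each contained fence.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Toy model, not kernel code: a chain link holds a fence and points at
 * the previous link in the chain. */
struct chain_fence {
    uint64_t deadline_ns; /* 0 means no deadline set yet */
};

struct fence_chain_link {
    struct chain_fence fence;
    struct fence_chain_link *prev;
};

static void chain_fence_set_deadline(struct chain_fence *f, uint64_t ns)
{
    /* Keep the earliest (most urgent) deadline seen so far. */
    if (!f->deadline_ns || ns < f->deadline_ns)
        f->deadline_ns = ns;
}

/* Setting a deadline on the chain forwards it to every fence the chain
 * depends on, since all of them must signal for the chain to signal. */
static void fence_chain_set_deadline(struct fence_chain_link *chain,
                                     uint64_t ns)
{
    for (struct fence_chain_link *c = chain; c; c = c->prev)
        chain_fence_set_deadline(&c->fence, ns);
}
```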
On 18/02/2023 21:15, Rob Clark wrote:
From: Rob Clark
Add a way to hint to the fence signaler of an upcoming deadline, such as
vblank, which the fence waiter would prefer not to miss. This is to aid
the fence signaler in making power management decisions, like boosting
frequency as the
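The preview above is cut off, but the core of the deadline-hint idea can be sketched in miniature. This is a purely illustrative userspace model (none of these names are the kernel API): waiters record the earliest requested deadline on a fence, and the signaler compares it against its own ramp-up headroom to decide whether boosting frequency is warranted.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative model of a fence carrying a deadline hint. */
struct model_fence {
    bool has_deadline;
    uint64_t deadline_ns; /* earliest deadline requested by any waiter */
};

static void model_fence_set_deadline(struct model_fence *f, uint64_t ns)
{
    /* Multiple waiters may set deadlines; keep only the most urgent. */
    if (!f->has_deadline || ns < f->deadline_ns) {
        f->deadline_ns = ns;
        f->has_deadline = true;
    }
}

/* The signaler's policy: boost if the deadline falls within the time it
 * would otherwise take to finish at the current frequency (headroom). */
static bool model_fence_should_boost(const struct model_fence *f,
                                     uint64_t now_ns, uint64_t headroom_ns)
{
    return f->has_deadline && f->deadline_ns < now_ns + headroom_ns;
}
```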
On 20/02/2023 17:18, Andrea Righi wrote:
It seems that commit bc3c5e0809ae ("drm/i915/sseu: Don't try to store EU
mask internally in UAPI format") exposed a potential out-of-bounds
access, reported by UBSAN as following on a laptop with a gen 11 i915
card:
UBSAN: array-index-out-of-bounds
On 20/02/2023 16:44, Tvrtko Ursulin wrote:
On 20/02/2023 15:52, Rob Clark wrote:
On Mon, Feb 20, 2023 at 3:33 AM Tvrtko Ursulin
wrote:
On 17/02/2023 20:45, Rodrigo Vivi wrote:
[snip]
Yeah I agree. And as not all media use cases are the same, as are not
all compute contexts someone
On 20/02/2023 15:52, Rob Clark wrote:
On Mon, Feb 20, 2023 at 3:33 AM Tvrtko Ursulin
wrote:
On 17/02/2023 20:45, Rodrigo Vivi wrote:
[snip]
Yeah I agree. And as not all media use cases are the same, as are not
all compute contexts someone somewhere will need to run a series
On 20/02/2023 15:45, Rob Clark wrote:
On Mon, Feb 20, 2023 at 4:22 AM Tvrtko Ursulin
wrote:
On 17/02/2023 17:00, Rob Clark wrote:
On Fri, Feb 17, 2023 at 8:03 AM Tvrtko Ursulin
wrote:
[snip]
adapted from your patches.. I think the basic idea of deadlines
(which includes "I
On 18/02/2023 21:15, Rob Clark wrote:
From: Rob Clark
Signed-off-by: Rob Clark
---
This should probably be re-written by someone who knows the i915
request/timeline stuff better, to deal with non-immediate deadlines.
But as-is I think this should be enough to handle the case where
we want
On 18/02/2023 19:54, Rob Clark wrote:
On Thu, Feb 16, 2023 at 3:00 AM Tvrtko Ursulin
wrote:
From: Tvrtko Ursulin
Track how many callers are explicitly waiting on a fence to signal and
allow querying that via new dma_fence_wait_count() API.
This provides infrastructure on top of which
On 18/02/2023 19:56, Rob Clark wrote:
On Thu, Feb 16, 2023 at 2:59 AM Tvrtko Ursulin
wrote:
From: Tvrtko Ursulin
Use the previously added dma-fence tracking of explicit waiters.
Signed-off-by: Tvrtko Ursulin
---
drivers/gpu/drm/drm_syncobj.c | 6 +++---
1 file changed, 3 insertions
On 17/02/2023 17:00, Rob Clark wrote:
On Fri, Feb 17, 2023 at 8:03 AM Tvrtko Ursulin
wrote:
[snip]
adapted from your patches.. I think the basic idea of deadlines
(which includes "I want it NOW" ;-)) isn't controversial, but the
original idea got caught up in some bikeshed (
On 17/02/2023 20:45, Rodrigo Vivi wrote:
On Fri, Feb 17, 2023 at 09:00:49AM -0800, Rob Clark wrote:
On Fri, Feb 17, 2023 at 8:03 AM Tvrtko Ursulin
wrote:
On 17/02/2023 14:55, Rob Clark wrote:
On Fri, Feb 17, 2023 at 4:56 AM Tvrtko Ursulin
wrote:
On 16/02/2023 18:19, Rodrigo Vivi
On 20/02/2023 10:01, Christian König wrote:
On 20.02.2023 at 10:55, Tvrtko Ursulin wrote:
Hi,
On 14/02/2023 13:59, Christian König wrote:
On 14.02.2023 at 13:50, Tvrtko Ursulin wrote:
From: Tvrtko Ursulin
Currently drm_gem_handle_create_tail exposes the handle to userspace
before
Hi,
On 14/02/2023 13:59, Christian König wrote:
On 14.02.2023 at 13:50, Tvrtko Ursulin wrote:
From: Tvrtko Ursulin
Currently drm_gem_handle_create_tail exposes the handle to userspace
before the buffer object construction is complete. This allows working
against a partially
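The commit message above describes a publish-before-ready race. A minimal userspace sketch of the fix's principle (illustrative names, not the DRM GEM API): finish constructing the object first, including any setup that may fail, and only then insert it into the handle lookup table.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define HANDLE_TABLE_SIZE 8

struct gem_like_obj {
    bool constructed; /* set only once all setup has succeeded */
};

static struct gem_like_obj *handle_table[HANDLE_TABLE_SIZE];

/* Publishing is the very last step: a half-built object never becomes
 * visible through the table, so lookups cannot race with construction. */
static int publish_handle(struct gem_like_obj *obj)
{
    if (!obj->constructed)
        return -1; /* refuse to expose a partially built object */
    for (int i = 0; i < HANDLE_TABLE_SIZE; i++) {
        if (!handle_table[i]) {
            handle_table[i] = obj;
            return i;
        }
    }
    return -1;
}
```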
: Don't use BAR mappings for ring buffers with LLC
drivers/gpu/drm/i915/gt/intel_ring.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
It is doing what it laid out as the problem statement so series looks
good to me.
Acked-by: Tvrtko Ursulin
Regards,
Tvrtko
On 17/02/2023 14:55, Rob Clark wrote:
On Fri, Feb 17, 2023 at 4:56 AM Tvrtko Ursulin
wrote:
On 16/02/2023 18:19, Rodrigo Vivi wrote:
On Tue, Feb 14, 2023 at 11:14:00AM -0800, Rob Clark wrote:
On Fri, Feb 10, 2023 at 5:07 AM Tvrtko Ursulin
wrote:
From: Tvrtko Ursulin
In i915 we have
On 16/02/2023 18:19, Rodrigo Vivi wrote:
On Tue, Feb 14, 2023 at 11:14:00AM -0800, Rob Clark wrote:
On Fri, Feb 10, 2023 at 5:07 AM Tvrtko Ursulin
wrote:
From: Tvrtko Ursulin
In i915 we have this concept of "wait boosting" where we give a priority boost
for instance to fe
On 16/02/2023 15:41, Matt Roper wrote:
On Thu, Feb 16, 2023 at 09:21:23AM +, Tvrtko Ursulin wrote:
From: Tvrtko Ursulin
As the logic for selecting the register and corresponding values grew, the
code became a bit unsightly. Consolidate by storing the required values at
engine init time
On 14/02/2023 19:14, Rob Clark wrote:
On Fri, Feb 10, 2023 at 5:07 AM Tvrtko Ursulin
wrote:
From: Tvrtko Ursulin
In i915 we have this concept of "wait boosting" where we give a priority boost
for instance to fences which are actively waited upon from userspace. This has
its pro
From: Tvrtko Ursulin
Userspace waits coming via the drm_syncobj route have so far been
bypassing the waitboost mechanism.
Use the previously added dma-fence wait tracking API and apply the
same waitboosting logic which applies to other entry points.
This should fix the performance regressions
From: Tvrtko Ursulin
Use the previously added dma-fence tracking of explicit waiters.
Signed-off-by: Tvrtko Ursulin
---
drivers/gpu/drm/drm_syncobj.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/drm_syncobj.c b/drivers/gpu/drm/drm_syncobj.c
index
From: Tvrtko Ursulin
Use the newly added dma-fence API to apply waitboost not only requests
which have been marked with I915_WAIT_PRIORITY by i915, but which may be
waited upon by others (such as for instance buffer sharing in multi-GPU
scenarios).
Signed-off-by: Tvrtko Ursulin
---
drivers
From: Tvrtko Ursulin
Use the previously added dma-fence API to mark the direct i915 waits as
explicit. This has no significant effect apart from following the new
pattern.
Signed-off-by: Tvrtko Ursulin
---
drivers/gpu/drm/i915/i915_request.c | 3 ++-
1 file changed, 2 insertions(+), 1
From: Tvrtko Ursulin
Track how many callers are explicitly waiting on a fence to signal and
allow querying that via new dma_fence_wait_count() API.
This provides infrastructure on top of which generic "waitboost" concepts
can be implemented by individual drivers. Wait-boosting is an
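The cover text above is truncated, but the core of the proposed dma_fence_wait_count() infrastructure can be modelled in a few lines of illustrative userspace C (not the kernel API): count explicit waiters on a fence so a driver can apply waitboost only when someone is actually waiting.

```c
#include <assert.h>

/* Illustrative model of explicit-waiter tracking on a fence. */
struct waited_fence {
    unsigned int wait_count; /* number of explicit waiters right now */
};

static void fence_wait_begin(struct waited_fence *f) { f->wait_count++; }
static void fence_wait_end(struct waited_fence *f)   { f->wait_count--; }

static unsigned int fence_wait_count(const struct waited_fence *f)
{
    return f->wait_count;
}

/* A driver-side policy built on top: boost while waiters exist. */
static int should_waitboost(const struct waited_fence *f)
{
    return fence_wait_count(f) > 0;
}
```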
From: Tvrtko Ursulin
In preparation of adding a new field to struct dma_fence_cb we will need
an initialization helper for those callers who add callbacks by open-
coding. That will ensure they initialize all the fields so common code
does not get confused by potential garbage in some fields
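A hedged sketch of why such an initialization helper matters (toy struct, not the real struct dma_fence_cb): open-coded users who only set the function pointer would leave any newly added field as stack garbage, whereas a helper that zeroes the whole struct keeps every field, present or future, well defined.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

struct toy_cb {
    void (*func)(struct toy_cb *cb);
    void *list_node;      /* linkage consumed by common code */
    unsigned long flags;  /* imagine a field added later */
};

static void toy_cb_init(struct toy_cb *cb, void (*func)(struct toy_cb *))
{
    /* Zero first so a newly added field can never carry garbage into
     * common code, then set the one field callers care about. */
    memset(cb, 0, sizeof(*cb));
    cb->func = func;
}

static void example_func(struct toy_cb *cb) { (void)cb; }
```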
From: Tvrtko Ursulin
Use the previously added initialization helper to ensure correct operation
of the common code.
Signed-off-by: Tvrtko Ursulin
Cc: Zack Rusin
---
drivers/gpu/drm/vmwgfx/vmwgfx_fence.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm
From: Tvrtko Ursulin
Unhide some i915 helpers which are used for splitting the signalled
check vs notification stages during en masse fence processing.
Signed-off-by: Tvrtko Ursulin
---
drivers/dma-buf/dma-fence.c | 35 +++--
drivers/gpu/drm/i915/gt
From: Tvrtko Ursulin
Use the previously added initialization helper to ensure correct operation
of the common code.
Signed-off-by: Tvrtko Ursulin
---
drivers/gpu/drm/i915/i915_active.c | 2 +-
drivers/gpu/drm/i915/i915_active.h | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff
From: Tvrtko Ursulin
In i915 we have this concept of "wait boosting" where we give a priority boost
for instance to fences which are actively waited upon from userspace. This has
its pros and cons and can certainly be discussed at length. However, the fact is
some workloads really like it
From: Tvrtko Ursulin
As the logic for selecting the register and corresponding values grew, the
code became a bit unsightly. Consolidate by storing the required values at
engine init time in the engine itself, and by doing so minimise the amount
of invariant platform and engine checks during
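The pattern described above, resolving invariant platform/engine checks once at init and caching the result in the engine, can be sketched like this (fields and register values below are made up for illustration):

```c
#include <assert.h>

struct tlb_inv_desc {
    unsigned int reg;     /* which register to write (illustrative) */
    unsigned int request; /* value that requests an invalidation */
};

struct toy_engine {
    int graphics_ver;
    int class_id;
    struct tlb_inv_desc tlb_inv; /* filled once at engine init */
};

/* All the invariant platform/engine checks run exactly once, here... */
static void engine_init_tlb_inv(struct toy_engine *e)
{
    if (e->graphics_ver >= 12 && e->class_id == 0)
        e->tlb_inv = (struct tlb_inv_desc){ .reg = 0x100, .request = 1 };
    else
        e->tlb_inv = (struct tlb_inv_desc){ .reg = 0x200, .request = 1 };
}

/* ...so the hot invalidation path is a plain load, not a branch ladder. */
static unsigned int tlb_inv_reg(const struct toy_engine *e)
{
    return e->tlb_inv.reg;
}
```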
On 15/02/2023 01:56, Ceraolo Spurio, Daniele wrote:
On 2/14/2023 3:48 PM, john.c.harri...@intel.com wrote:
From: John Harrison
Direction from hardware is that stolen memory should never be used for
ring buffer allocations. There are too many caching pitfalls due to the
way stolen memory
On 02/02/2023 09:39, Andrzej Hajda wrote:
On 02.02.2023 09:33, Tvrtko Ursulin wrote:
On 02/02/2023 07:43, Andrzej Hajda wrote:
On 01.02.2023 17:51, Tvrtko Ursulin wrote:
[snip]
Btw - do you have any idea why the test is suppressed already?! CI
told me BAT was a success...
Except
From: Tvrtko Ursulin
Currently drm_gem_handle_create_tail exposes the handle to userspace
before the buffer object construction is complete. Allowing userspace to
work against a partially constructed object, which may also be in
the process of having its creation fail, can have a range
flags should have been grouped with the ones below in one block.
I have tidied this while pushing, thanks for the fix and review!
Regards,
Tvrtko
v2: Add fixes tag (Tvrtko, Matt A)
Cc: Matthew Auld
Cc: Tvrtko Ursulin
Reviewed-by: Matthew Auld
Signed-off-by: Aravind Iddamsetty
---
drivers/gp
Hi,
Adding Matt & Thomas as potential candidates to review.
Regards,
Tvrtko
On 03/02/2023 19:30, Deepak R Varma wrote:
The macro definition of gen6_for_all_pdes() expands to a for loop such
that it breaks when the page table is null. Hence there is no need to
again test validity of the
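The point of the cleanup above can be shown with a toy macro in the same spirit (this is not the real gen6_for_all_pdes() definition): because the loop condition itself assigns the entry and terminates at the first NULL, the body never needs to re-test validity.

```c
#include <assert.h>
#include <stddef.h>

/* Toy loop macro: assigns the current entry and stops the loop as soon
 * as it is NULL, so the body only ever sees valid entries. */
#define for_each_valid_entry(pt, arr, count, i)                        \
    for ((i) = 0; (i) < (count) && ((pt) = (arr)[(i)]) != NULL; (i)++)

static int count_valid(int **arr, int count)
{
    int *pt;
    int i, seen = 0;

    for_each_valid_entry(pt, arr, count, i)
        seen++; /* no redundant "if (!pt)" check needed in the body */
    return seen;
}
```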
visible to userspace, so
it must be the last thing we do with the fence.
Fixes: 00dae4d3d35d ("drm/i915: Implement SINGLE_TIMELINE with a syncobj (v4)")
This is correct and the fix looks good to me.
Reviewed-by: Tvrtko Ursulin
CI is green so I will merge it, thanks again for
On 03/02/2023 11:57, Aravind Iddamsetty wrote:
Obj flags for shmem objects are not being set correctly.
Cc: Matthew Auld
Signed-off-by: Aravind Iddamsetty
Could even be:
Fixes: 13d29c823738 ("drm/i915/ehl: unconditionally flush the pages on acquire")
Cc: # v5.15+
?
Regards,
Tvrtko
On 02/02/2023 23:43, T.J. Mercier wrote:
On Wed, Feb 1, 2023 at 6:23 AM Tvrtko Ursulin
wrote:
On 01/02/2023 01:49, T.J. Mercier wrote:
On Tue, Jan 31, 2023 at 6:01 AM Tvrtko Ursulin
wrote:
On 25/01/2023 20:04, T.J. Mercier wrote:
On Wed, Jan 25, 2023 at 9:31 AM Tvrtko Ursulin
wrote
On 02/02/2023 23:43, T.J. Mercier wrote:
On Wed, Feb 1, 2023 at 6:52 AM Tvrtko Ursulin
wrote:
On 01/02/2023 14:23, Tvrtko Ursulin wrote:
On 01/02/2023 01:49, T.J. Mercier wrote:
On Tue, Jan 31, 2023 at 6:01 AM Tvrtko Ursulin
wrote:
On 25/01/2023 20:04, T.J. Mercier wrote:
On Wed
On 02/02/2023 17:11, Teres Alexis, Alan Previn wrote:
On Thu, 2023-02-02 at 08:43 +, Tvrtko Ursulin wrote:
On 02/02/2023 08:13, Alan Previn wrote:
The Mesa driver creates a protected context on every driver handle
initialization to query the caps bit for the app. So when running CI tests
On 28/01/2023 01:11, Tejun Heo wrote:
On Thu, Jan 12, 2023 at 04:56:07PM +, Tvrtko Ursulin wrote:
...
+ /*
+* 1st pass - reset working values and update hierarchical weights and
+* GPU utilisation.
+*/
+ if (!__start_scanning(root, period_us
On 02/02/2023 08:13, Alan Previn wrote:
The Mesa driver creates a protected context on every driver handle
initialization to query the caps bit for the app. So when running CI tests,
hundreds of drm_errors are observed when enabling PXP
in .config but using a SoC or BIOS configuration that cannot
On 02/02/2023 07:43, Andrzej Hajda wrote:
On 01.02.2023 17:51, Tvrtko Ursulin wrote:
[snip]
+static int intel_engine_init_tlb_invalidation(struct intel_engine_cs
*engine)
+{
+ static const union intel_engine_tlb_inv_reg gen8_regs[] = {
+ [RENDER_CLASS].reg = GEN8_RTCR
On 01/02/2023 14:23, Tvrtko Ursulin wrote:
On 01/02/2023 01:49, T.J. Mercier wrote:
On Tue, Jan 31, 2023 at 6:01 AM Tvrtko Ursulin
wrote:
On 25/01/2023 20:04, T.J. Mercier wrote:
On Wed, Jan 25, 2023 at 9:31 AM Tvrtko Ursulin
wrote:
Hi,
On 25/01/2023 11:52, Michal Hocko wrote
On 01/02/2023 01:49, T.J. Mercier wrote:
On Tue, Jan 31, 2023 at 6:01 AM Tvrtko Ursulin
wrote:
On 25/01/2023 20:04, T.J. Mercier wrote:
On Wed, Jan 25, 2023 at 9:31 AM Tvrtko Ursulin
wrote:
Hi,
On 25/01/2023 11:52, Michal Hocko wrote:
On Tue 24-01-23 19:46:28, Shakeel Butt wrote
where to implement register workarounds (Gustavo Sousa)
- Use uabi engines for the default engine map (Tvrtko Ursulin)
- Flush all tiles on test exit (Tvrtko Ursulin)
- Annotate a couple more workaround registers as MCR (Matt Roper)
Driver refactors:
- Add and use GuC oriented print macros (Michal
On 25/01/2023 20:04, T.J. Mercier wrote:
On Wed, Jan 25, 2023 at 9:31 AM Tvrtko Ursulin
wrote:
Hi,
On 25/01/2023 11:52, Michal Hocko wrote:
On Tue 24-01-23 19:46:28, Shakeel Butt wrote:
On Tue, Jan 24, 2023 at 03:59:58PM +0100, Michal Hocko wrote:
On Mon 23-01-23 19:17:23, T.J. Mercier
s a "BKL" (struct_mutex) around the call to
i915_gem_object_set_tiling. Otherwise fix looks good:
Reviewed-by: Tvrtko Ursulin
I'll tweak the fixes tag and merge in a minute, thanks for the fix!
Regards,
Tvrtko
Signed-off-by: Rob Clark
---
drivers/gpu/drm/i915/gem/i915_gem_tiling
On 27/01/2023 14:11, Michal Koutný wrote:
On Fri, Jan 27, 2023 at 01:31:54PM +, Tvrtko Ursulin
wrote:
I think you missed the finish_suspend_scanning() part:
if (root_drmcs.suspended_period_us)
cancel_delayed_work_sync(&root_drmcs.scan_work);
So if scanning
On 27/01/2023 13:10, Matthew Auld wrote:
On Mon, 23 Jan 2023 at 16:57, Tvrtko Ursulin
wrote:
+ some more people based on e1a7ab4fca0c
On 19/01/2023 17:32, Rob Clark wrote:
From: Rob Clark
Adding the vm to the vm_xa table makes it visible to userspace, which
could try to race with us
On 27/01/2023 13:01, Michal Koutný wrote:
On Thu, Jan 12, 2023 at 04:56:07PM +, Tvrtko Ursulin
wrote:
+static int drmcs_can_attach(struct cgroup_taskset *tset)
+{
+ int ret;
+
+ /*
+* As processes are getting moved between groups we need to ensure
+* both
On 27/01/2023 10:04, Michal Koutný wrote:
On Thu, Jan 26, 2023 at 05:57:24PM +, Tvrtko Ursulin
wrote:
So even if the RFC shows just a simple i915 implementation, the controller
itself shouldn't prevent a smarter approach (via exposed ABI).
scan/query + over budget notification is IMO
On 26/01/2023 17:57, Tvrtko Ursulin wrote:
On 26/01/2023 17:04, Tejun Heo wrote:
driver folks think about the current RFC tho. Is at least AMD on board
with
the approach?
Yes I am keenly awaiting comments from the DRM colleagues as well.
Forgot to mention one thing on this point which
Hi,
(Two replies in one, hope you will manage to navigate it.)
On 26/01/2023 17:04, Tejun Heo wrote:
Hello,
On Thu, Jan 26, 2023 at 02:00:50PM +0100, Michal Koutný wrote:
On Wed, Jan 25, 2023 at 06:11:35PM +, Tvrtko Ursulin
wrote:
I don't immediately see how you envisage the half
On 25/01/2023 19:04, Matt Roper wrote:
On Wed, Jan 25, 2023 at 10:51:53AM +, Tvrtko Ursulin wrote:
On 24/01/2023 20:54, john.c.harri...@intel.com wrote:
From: John Harrison
Uncore is really part of the GT. So use the GT specific debug/error
That's not really true; uncore should
Cc: Jani Nikula
Cc: Joonas Lahtinen
Cc: Rodrigo Vivi
Cc: Tvrtko Ursulin
Cc: Daniele Ceraolo Spurio
Cc: Andrzej Hajda
Cc: Matthew Auld
Cc: Matt Roper
Cc: Umesh Nerlige Ramappa
Cc: Michael Cheng
Cc: Lucas De Marchi
Cc: Tejas Upadhyay
Cc: Andy Shevchenko
Cc: Aravind Iddamsetty
Cc: Alan Previn
Cc: Br
On 25/01/2023 18:00, John Harrison wrote:
On 1/24/2023 06:40, Tvrtko Ursulin wrote:
On 20/01/2023 23:28, john.c.harri...@intel.com wrote:
From: John Harrison
The debugfs dump of requests was confused about what state requires
the execlist lock versus the GuC lock. There was also a bunch
Hi,
On 23/01/2023 15:42, Michal Koutný wrote:
Hello Tvrtko.
Interesting work.
Thanks!
On Thu, Jan 12, 2023 at 04:55:57PM +, Tvrtko Ursulin
wrote:
Because of the heterogenous hardware and driver DRM capabilities, soft limits
are implemented as a loose co-operative (bi-directional
Hi,
On 25/01/2023 11:52, Michal Hocko wrote:
On Tue 24-01-23 19:46:28, Shakeel Butt wrote:
On Tue, Jan 24, 2023 at 03:59:58PM +0100, Michal Hocko wrote:
On Mon 23-01-23 19:17:23, T.J. Mercier wrote:
When a buffer is exported to userspace, use memcg to attribute the
buffer to the allocating
for messages which strictly are not GT related, than
not to have the origin at all (intel_de_... helpers, I *think*).
Reviewed-by: Tvrtko Ursulin
I'll just add Jani and Matt in case they have a different opinion.
Regards,
Tvrtko
Signed-off-by: John Harrison
---
drivers/gpu/drm/i915
On 20/01/2023 23:28, john.c.harri...@intel.com wrote:
From: John Harrison
The debugfs dump of requests was confused about what state requires
the execlist lock versus the GuC lock. There was also a bunch of
duplicated messy code between it and the error capture code.
So refactor the hung
ure error state on context reset")
Cc: Matthew Brost
Cc: John Harrison
Cc: Jani Nikula
Cc: Joonas Lahtinen
Cc: Rodrigo Vivi
Cc: Tvrtko Ursulin
Cc: Daniele Ceraolo Spurio
Cc: Andrzej Hajda
Cc: Matthew Auld
Cc: Matt Roper
Cc: Umesh Nerlige Ramappa
Cc: Michael Cheng
Cc: Lucas De Marchi
+ some more people based on e1a7ab4fca0c
On 19/01/2023 17:32, Rob Clark wrote:
From: Rob Clark
Adding the vm to the vm_xa table makes it visible to userspace, which
could try to race with us to close the vm. So we need to take our extra
reference before putting it in the table.
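The race described above follows a classic rule that can be modelled in userspace C (toy refcounting, not the i915 vm_xa code): take your own reference before the object becomes reachable by others, so a concurrent close that drops the table's reference can never free the object while you still use it.

```c
#include <assert.h>
#include <stdatomic.h>

struct toy_vm {
    atomic_int refcount;
};

static void vm_get(struct toy_vm *vm)
{
    atomic_fetch_add(&vm->refcount, 1);
}

/* Returns 1 when the caller dropped the last reference (i.e. the toy
 * equivalent of "the object would now be freed"). */
static int vm_put(struct toy_vm *vm)
{
    return atomic_fetch_sub(&vm->refcount, 1) == 1;
}
```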
Hi,
Chris was kind enough to bring my attention to this thread. Indeed this
information was asked for by various people for many years so it sounds
very useful to actually do attempt it.
On 04/01/2023 13:03, Boris Brezillon wrote:
drm-memory-all: memory held by this context. Note that all
then look safe to:
Acked-by: Tvrtko Ursulin
Regards,
Tvrtko
gt->i915->drm, "Got hung context on %s
with active request %lld:%lld [0x%04X] not yet started\n",
+engine->name, rq->fence.context, rq->fence.seqno,
ce->guc_id.id);
} else {
/*
* Getting here with
- i915_request_put(rq);
+ if (capture) {
+ intel_engine_coredump_add_vma(ee, capture, compress);
+ } else {
+ kfree(ee);
+ ee = NULL;
+ }
return ee;
-
-no_request_capture:
- kfree(ee);
- return NULL;
}
static void
LGTM - regardless of how i915_request_get_rcu flow ends up:
Acked-by: Tvrtko Ursulin
Regards,
Tvrtko
ure error state on context reset")
Cc: Matthew Brost
Cc: John Harrison
Cc: Jani Nikula
Cc: Joonas Lahtinen
Cc: Rodrigo Vivi
Cc: Tvrtko Ursulin
Cc: Daniele Ceraolo Spurio
Cc: Andrzej Hajda
Cc: Chris Wilson
Cc: Matthew Auld
Cc: Matt Roper
Cc: Umesh Nerlige Ramappa
Cc: Michael Cheng
Cc:
On 18/01/2023 13:19, Tvrtko Ursulin wrote:
Hi Dave & Daniel,
On 17/01/2023 17:52, Nirmoy Das wrote:
Currently there is no easy way for a drm driver to safely check and allow
drm_vma_offset_node for a drm file just once. Allow drm drivers to call
non-refcounted version of drm_vma_node_a
rack of each drm_vma_node_allow() to call subsequent
drm_vma_node_revoke() to prevent memory leak.
Cc: Maarten Lankhorst
Cc: Maxime Ripard
Cc: Thomas Zimmermann
Cc: David Airlie
Cc: Daniel Vetter
Cc: Tvrtko Ursulin
Cc: Andi Shyti
Okay to take this via drm-intel?
Do we need an additional
-gt-next-2023-01-18:
Driver Changes:
Fixes/improvements/new stuff:
- Fix workarounds on Gen2-3 (Tvrtko Ursulin)
- Fix HuC delayed load memory leaks (Daniele Ceraolo Spurio)
- Fix a BUG caused by impedance mismatch in dma_fence_wait_timeout and GuC
(Janusz Krzysztofik)
- Add DG2 workarounds
drm_vma_node_allow() to call subsequent
drm_vma_node_revoke() to prevent memory leak.
Cc: Maarten Lankhorst
Cc: Maxime Ripard
Cc: Thomas Zimmermann
Cc: David Airlie
Cc: Daniel Vetter
Cc: Tvrtko Ursulin
Cc: Andi Shyti
Suggested-by: Chris Wilson
Signed-off-by: Nirmoy Das
---
drivers/gpu/drm
drm_vma_node_revoke once per-file
on each mmap_offset. As the mmap_offset is reused by the client, the
per-file vm_count may remain non-zero and the rbtree leaked.
Call drm_vma_node_allow_once() instead to prevent that memory leak.
Cc: Tvrtko Ursulin
Cc: Andi Shyti
Fixes: 786555987207 ("
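A toy model of the allow/allow-once distinction from the patch above (simplified counters, not the drm_vma_node API): allow() bumps the per-file count on every call and leaks unless each call is paired with a revoke(), while allow_once() only ever takes the count from 0 to 1.

```c
#include <assert.h>

struct toy_vma_node {
    int vm_count; /* per-file open count in this toy model */
};

static void node_allow(struct toy_vma_node *n)
{
    n->vm_count++;
}

/* Grant access only on the first call; later calls are no-ops, so one
 * revoke() per file always brings the count back to zero. */
static void node_allow_once(struct toy_vma_node *n)
{
    if (n->vm_count == 0)
        n->vm_count = 1;
}

static void node_revoke(struct toy_vma_node *n)
{
    n->vm_count--;
}
```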
On 17/01/2023 16:03, Stanislaw Gruszka wrote:
Hi
On Thu, Jan 12, 2023 at 04:56:01PM +, Tvrtko Ursulin wrote:
From: Tvrtko Ursulin
To enable propagation of settings from the cgroup drm controller to drm we
need to start tracking which processes own which drm clients.
Implement
On 12/01/2023 21:42, Lucas De Marchi wrote:
On Thu, Jan 05, 2023 at 01:35:52PM +, Tvrtko Ursulin wrote:
Okay to sum it up below with some final notes..
On 04/01/2023 19:34, Matt Roper wrote:
On Wed, Jan 04, 2023 at 09:58:13AM +, Tvrtko Ursulin wrote:
On 23/12/2022 18:28, Lucas De
On 14/01/2023 01:27, John Harrison wrote:
On 1/13/2023 01:22, Tvrtko Ursulin wrote:
On 12/01/2023 20:59, John Harrison wrote:
On 1/12/2023 02:15, Tvrtko Ursulin wrote:
On 12/01/2023 02:53, john.c.harri...@intel.com wrote:
From: John Harrison
Engine resets are supposed to never fail
On 13/01/2023 21:29, John Harrison wrote:
On 1/13/2023 09:46, Hellstrom, Thomas wrote:
On Fri, 2023-01-13 at 09:51 +, Tvrtko Ursulin wrote:
On 12/01/2023 20:40, John Harrison wrote:
On 1/12/2023 02:01, Tvrtko Ursulin wrote:
On 12/01/2023 02:53, john.c.harri...@intel.com wrote:
[snip
On 13/01/2023 17:46, Hellstrom, Thomas wrote:
On Fri, 2023-01-13 at 09:51 +, Tvrtko Ursulin wrote:
On 12/01/2023 20:40, John Harrison wrote:
On 1/12/2023 02:01, Tvrtko Ursulin wrote:
On 12/01/2023 02:53, john.c.harri...@intel.com wrote:
From: John Harrison
There was a report
On 13/01/2023 03:15, Dixit, Ashutosh wrote:
On Thu, 12 Jan 2023 18:27:52 -0800, Vinay Belgaumkar wrote:
Reading current root sysfs entries gives a min/max of all
GTs. Updating this so we return default (GT0) values when root
level sysfs entries are accessed, instead of min/max for the card.
On 12/01/2023 20:40, John Harrison wrote:
On 1/12/2023 02:01, Tvrtko Ursulin wrote:
On 12/01/2023 02:53, john.c.harri...@intel.com wrote:
From: John Harrison
There was a report of error captures occurring without any hung
context being indicated despite the capture being initiated
On 12/01/2023 20:59, John Harrison wrote:
On 1/12/2023 02:15, Tvrtko Ursulin wrote:
On 12/01/2023 02:53, john.c.harri...@intel.com wrote:
From: John Harrison
Engine resets are supposed to never fail. But in the case when one
does (due to unknown reasons that normally come down to a missing
On 12/01/2023 20:46, John Harrison wrote:
On 1/12/2023 02:06, Tvrtko Ursulin wrote:
On 12/01/2023 02:53, john.c.harri...@intel.com wrote:
From: John Harrison
A hang situation has been observed where the only requests on the
context were either completed or not yet started according
On 11/01/2023 22:19, Daniel Vetter wrote:
On Tue, Jan 10, 2023 at 01:14:51PM +, Tvrtko Ursulin wrote:
On 06/01/2023 18:00, Daniel Vetter wrote:
On Fri, Jan 06, 2023 at 03:53:13PM +0100, Christian König wrote:
On 06.01.2023 at 11:53, Daniel Vetter wrote:
On Fri, Jan 06, 2023 at 11:32