Hi,
On 2023-10-17 11:09, Matthew Brost wrote:
> Rather than call free_job and run_job in same work item have a dedicated
> work item for each. This aligns with the design and intended use of work
> queues.
>
> v2:
>- Test for DMA_FENCE_FLAG_TIMESTAMP_BIT before setting
> timestamp in
On 2023-10-12 01:54, Luben Tuikov wrote:
> On 2023-10-12 00:31, Matthew Brost wrote:
>> On Wed, Oct 11, 2023 at 08:39:55PM -0400, Luben Tuikov wrote:
>>> On 2023-10-11 19:58, Matthew Brost wrote:
>>>> Rather than a global modparam for scheduling policy, move the scheduling
> - Adjust comment for drm_sched_tdr_queue_imm (Luben)
> v4:
> - Adjust commit message (Luben)
>
> Cc: Luben Tuikov
> Signed-off-by: Matthew Brost
Reviewed-by: Luben Tuikov
Yeah, this patch is very good now--thanks for updating it.
Regards,
Luben
> ---
> drivers/gpu/drm/scheduler/sched_
On 2023-10-17 09:22, Alex Deucher wrote:
> On Tue, Oct 17, 2023 at 12:52 AM Luben Tuikov wrote:
>>
>> A context priority value of AMD_CTX_PRIORITY_UNSET is now invalid--instead of
>> carrying it around and passing it to the Direct Rendering Manager--and it
>> beco
Signed-off-by: Luben Tuikov
---
drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c | 3 ++-
include/drm/gpu_scheduler.h | 3 +--
2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
index 092962b9306
-by: Luben Tuikov
---
drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
index 0dc9c655c4fbdb..092962b93064fc 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
On 2023-10-16 11:29, Luben Tuikov wrote:
> On 2023-10-16 11:12, Matthew Brost wrote:
>> On Sat, Oct 14, 2023 at 08:09:31PM -0400, Luben Tuikov wrote:
>>> On 2023-10-13 22:49, Luben Tuikov wrote:
>>>> On 2023-10-11 19:58, Matthew Brost wrote:
>>>>> Rathe
On 2023-10-16 11:12, Matthew Brost wrote:
> On Sat, Oct 14, 2023 at 08:09:31PM -0400, Luben Tuikov wrote:
>> On 2023-10-13 22:49, Luben Tuikov wrote:
>>> On 2023-10-11 19:58, Matthew Brost wrote:
>>>> Rather than call free_job and run_job in same work item have a ded
On 2023-10-16 11:08, Matthew Brost wrote:
> On Fri, Oct 13, 2023 at 01:45:08PM -0400, Luben Tuikov wrote:
>> On 2023-10-11 19:58, Matthew Brost wrote:
>>> Rather than a global modparam for scheduling policy, move the scheduling
>>> policy to scheduler so user can co
On 2023-10-16 10:57, Matthew Brost wrote:
> On Fri, Oct 13, 2023 at 10:52:22PM -0400, Luben Tuikov wrote:
>> On 2023-10-11 19:58, Matthew Brost wrote:
>>> Also add a lockdep assert to drm_sched_start_timeout.
>>>
>>> Signed-off-by: Matthew Brost
>>
On 2023-10-16 11:00, Matthew Brost wrote:
> On Fri, Oct 13, 2023 at 10:06:18PM -0400, Luben Tuikov wrote:
>> On 2023-10-11 19:58, Matthew Brost wrote:
>>> DRM_SCHED_POLICY_SINGLE_ENTITY creates a 1 to 1 relationship between
>>> scheduler and entity. No priorities or
On 2023-10-13 22:49, Luben Tuikov wrote:
> On 2023-10-11 19:58, Matthew Brost wrote:
>> Rather than call free_job and run_job in same work item have a dedicated
>> work item for each. This aligns with the design and intended use of work
>> queues.
- Adjust comment for drm_sched_tdr_queue_imm (Luben)
>
> Cc: Luben Tuikov
> Signed-off-by: Matthew Brost
> ---
> drivers/gpu/drm/scheduler/sched_main.c | 18 +-
> include/drm/gpu_scheduler.h | 1 +
> 2 files changed, 18 insertions(+), 1 deletion(-)
On 2023-10-11 19:58, Matthew Brost wrote:
> Also add a lockdep assert to drm_sched_start_timeout.
>
> Signed-off-by: Matthew Brost
> Reviewed-by: Luben Tuikov
I don't remember sending a Reviewed-by email to this patch.
I'll add the R-V to the commit when I apply and push this
On 2023-10-11 19:58, Matthew Brost wrote:
> Rather than call free_job and run_job in same work item have a dedicated
> work item for each. This aligns with the design and intended use of work
> queues.
>
> v2:
>- Test for DMA_FENCE_FLAG_TIMESTAMP_BIT before setting
> timestamp in
On 2023-10-11 19:58, Matthew Brost wrote:
> DRM_SCHED_POLICY_SINGLE_ENTITY creates a 1 to 1 relationship between
> scheduler and entity. No priorities or run queue used in this mode.
> Intended for devices with firmware schedulers.
>
> v2:
> - Drop sched / rq union (Luben)
> v3:
> - Don't
v3d build (CI)
> - s/bad_policies/drm_sched_policy_mismatch/ (Luben)
> - Don't update modparam doc (Luben)
> v4:
> - Fix alignment in msm_ringbuffer_new (Luben / checkpatch)
>
> Reviewed-by: Luben Tuikov
> Signed-off-by: Matthew Brost
Reviewed-by: Luben Tuikov
> ---
> drive
Update comment for drm_sched_wqueue_enqueue
> - (Luben) Positive check for submit_wq in drm_sched_init
> - (Luben) s/alloc_submit_wq/own_submit_wq
>
> Signed-off-by: Matthew Brost
Reviewed-by: Luben Tuikov
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 2 +-
> dri
n-statement (Luben)
> - update drm_sched_wqueue_ready comment (Luben)
>
> Signed-off-by: Matthew Brost
> Cc: Luben Tuikov
Cc comes before S-O-B, but I can fix this when applying it, so don't worry about
this patch anymore. I'll also add Link: and so on, but this is all automated for
me so d
On 2023-10-11 22:10, Danilo Krummrich wrote:
>
>
> On 10/12/23 03:52, Luben Tuikov wrote:
>> Hi,
>>
>> Thanks for fixing the title and submitting a v2 of this patch. Comments
>> inlined below.
>>
>> On 2023-10-09 18:35, Danilo Krummrich wrote:
v3d build (CI)
> - s/bad_policies/drm_sched_policy_mismatch/ (Luben)
> - Don't update modparam doc (Luben)
> v4:
> - Fix alignment in msm_ringbuffer_new (Luben / checkpatch)
>
> Reviewed-by: Luben Tuikov
> Signed-off-by: Matthew Brost
Hi,
Forgot to mention this, but it is a very
On 2023-10-12 00:31, Matthew Brost wrote:
> On Wed, Oct 11, 2023 at 08:39:55PM -0400, Luben Tuikov wrote:
>> On 2023-10-11 19:58, Matthew Brost wrote:
>>> Rather than a global modparam for scheduling policy, move the scheduling
>>> policy to scheduler so user can co
Hi,
Thanks for fixing the title and submitting a v2 of this patch. Comments inlined
below.
On 2023-10-09 18:35, Danilo Krummrich wrote:
> Currently, job flow control is implemented simply by limiting the number
> of jobs in flight. Therefore, a scheduler is initialized with a
> submission limit
v3d build (CI)
> - s/bad_policies/drm_sched_policy_mismatch/ (Luben)
> - Don't update modparam doc (Luben)
> v4:
> - Fix alignment in msm_ringbuffer_new (Luben / checkpatch)
>
> Reviewed-by: Luben Tuikov
> Signed-off-by: Matthew Brost
Was the R-V added by hand? (As in edi
On 2023-10-05 00:06, Matthew Brost wrote:
> On Thu, Sep 28, 2023 at 12:14:12PM -0400, Luben Tuikov wrote:
>> On 2023-09-19 01:01, Matthew Brost wrote:
>>> Rather than call free_job and run_job in same work item have a dedicated
>>> work item for each. This aligns with
On 2023-10-06 19:43, Matthew Brost wrote:
> On Fri, Oct 06, 2023 at 03:14:04PM +, Matthew Brost wrote:
>> On Fri, Oct 06, 2023 at 08:59:15AM +0100, Tvrtko Ursulin wrote:
>>>
>>> On 05/10/2023 05:13, Luben Tuikov wrote:
>>>> On 2023-10-04 23:33, Matthew
On 2023-10-06 11:14, Matthew Brost wrote:
> On Fri, Oct 06, 2023 at 08:59:15AM +0100, Tvrtko Ursulin wrote:
>>
>> On 05/10/2023 05:13, Luben Tuikov wrote:
>>> On 2023-10-04 23:33, Matthew Brost wrote:
>>>> On Tue, Sep 26, 2023 at 11:32:10PM -0400, Luben Tuikov
On 2023-10-06 03:59, Tvrtko Ursulin wrote:
>
> On 05/10/2023 05:13, Luben Tuikov wrote:
>> On 2023-10-04 23:33, Matthew Brost wrote:
>>> On Tue, Sep 26, 2023 at 11:32:10PM -0400, Luben Tuikov wrote:
>>>> Hi,
>>>>
>>>> On 2023-09-19 01:01, M
On 2023-10-04 23:33, Matthew Brost wrote:
> On Tue, Sep 26, 2023 at 11:32:10PM -0400, Luben Tuikov wrote:
>> Hi,
>>
>> On 2023-09-19 01:01, Matthew Brost wrote:
>>> In XE, the new Intel GPU driver, a choice was made to have a 1 to 1
>>> mapping between a
On 2023-10-04 23:11, Matthew Brost wrote:
> On Sat, Sep 30, 2023 at 03:48:07PM -0400, Luben Tuikov wrote:
>> On 2023-09-29 17:53, Luben Tuikov wrote:
>>> Hi,
>>>
>>> On 2023-09-19 01:01, Matthew Brost wrote:
>>>> If the TDR is set to a very sm
>
> Cc: Alex Deucher
> Cc: "Christian König"
> Cc: "Pan, Xinhui"
> Cc: David Airlie
> Cc: Daniel Vetter
> Cc: "Gustavo A. R. Silva"
> Cc: Luben Tuikov
> Cc: Christophe JAILLET
> Cc: Felix Kuehling
> Cc: amd-...@lists.freedesk
On 2023-09-29 17:53, Luben Tuikov wrote:
> Hi,
>
> On 2023-09-19 01:01, Matthew Brost wrote:
>> If the TDR is set to a very small value it can fire before the
>> submission is started in the function drm_sched_start. The submission is
>> expected to be running when the T
On 2023-09-19 01:01, Matthew Brost wrote:
> Add helper to queue TDR immediately for current and future jobs. This
> will be used in XE, new Intel GPU driver, to trigger the TDR to cleanup
Please use present tense, "is", in code, comments, commits, etc.
Is it "XE" or is it "Xe"? I always thought
Hi,
On 2023-09-19 01:01, Matthew Brost wrote:
> If the TDR is set to a value, it can fire before a job is submitted in
> drm_sched_main. The job should always be submitted before the TDR
> fires, fix this ordering.
>
> v2:
> - Add to pending list before run_job, start TDR after (Luben,
Hi,
On 2023-09-19 01:01, Matthew Brost wrote:
> If the TDR is set to a very small value it can fire before the
> submission is started in the function drm_sched_start. The submission is
> expected to be running when the TDR fires, fix this ordering so this
> expectation is always met.
>
>
Hi,
On 2023-09-19 01:01, Matthew Brost wrote:
> Also add a lockdep assert to drm_sched_start_timeout.
>
> Signed-off-by: Matthew Brost
Reviewed-by: Luben Tuikov
Thanks for this patch!
> ---
> drivers/gpu/drm/scheduler/sched_main.c | 23 +--
>
On 2023-09-19 01:01, Matthew Brost wrote:
> Rather than call free_job and run_job in same work item have a dedicated
> work item for each. This aligns with the design and intended use of work
> queues.
>
> v2:
>- Test for DMA_FENCE_FLAG_TIMESTAMP_BIT before setting
> timestamp in
On 2023-09-28 04:02, Boris Brezillon wrote:
> On Wed, 27 Sep 2023 13:54:38 +0200
> Christian König wrote:
>
>> On 26.09.23 at 09:11, Boris Brezillon wrote:
>>> On Mon, 25 Sep 2023 19:55:21 +0200
>>> Christian König wrote:
>>>
On 25.09.23 at 14:55, Boris Brezillon wrote:
> +The
Hi,
On 2023-09-19 01:01, Matthew Brost wrote:
> DRM_SCHED_POLICY_SINGLE_ENTITY creates a 1 to 1 relationship between
> scheduler and entity. No priorities or run queue used in this mode.
> Intended for devices with firmware schedulers.
>
> v2:
> - Drop sched / rq union (Luben)
> v3:
> -
On 2023-09-27 08:15, Christian König wrote:
> On 27.09.23 at 14:11, Danilo Krummrich wrote:
>> On 9/27/23 13:54, Christian König wrote:
>>> On 26.09.23 at 09:11, Boris Brezillon wrote:
On Mon, 25 Sep 2023 19:55:21 +0200
Christian König wrote:
> On 25.09.23 at 14:55,
msm_ringbuffer_new(struct msm_gpu
> *gpu, int id,
>
> ret = drm_sched_init(&ring->sched, &msm_sched_ops, NULL,
> num_hw_submissions, 0, sched_timeout,
> - NULL, NULL, to_msm_bo(ring->bo)->name, gpu->dev->dev);
> + NULL, NULL, to_msm_bo(ring->bo)
Hi,
On 2023-09-19 01:01, Matthew Brost wrote:
> In XE, the new Intel GPU driver, a choice was made to have a 1 to 1
> mapping between a drm_gpu_scheduler and drm_sched_entity. At first this
> seems a bit odd but let us explain the reasoning below.
>
> 1. In XE the submission order from multiple
Hi,
On 2023-09-26 20:13, Danilo Krummrich wrote:
> On 9/26/23 22:43, Luben Tuikov wrote:
>> Hi,
>>
>> On 2023-09-24 18:43, Danilo Krummrich wrote:
>>> Currently, job flow control is implemented simply by limiting the amount
>>> of jobs in flight
void drm_sched_start(struct drm_gpu_scheduler *sched,
> bool full_recovery)
> spin_unlock(&sched->job_list_lock);
> }
>
> - kthread_unpark(sched->thread);
> + drm_sched_submit_start(sched);
> }
> EXPORT_SYMBOL(drm_sched_start);
>
> @@ -1206,3 +1206,39 @@ v
Hi,
Please also CC me to the whole set, as opposed to just one patch of the set.
And so in the future.
Thanks!
--
Regards,
Luben
On 2023-09-26 16:43, Luben Tuikov wrote:
> Hi,
>
> On 2023-09-24 18:43, Danilo Krummrich wrote:
>> Currently, job flow control is implemented sim
Hi,
On 2023-09-24 18:43, Danilo Krummrich wrote:
> Currently, job flow control is implemented simply by limiting the amount
> of jobs in flight. Therefore, a scheduler is initialized with a
> submission limit that corresponds to a certain amount of jobs.
"certain"? How about this instead:
" ...
On 2023-09-19 01:58, Christian König wrote:
> On 19.09.23 at 07:01, Matthew Brost wrote:
>> Add scheduler submit ready, stop, and start helpers to hide the
>> implementation details of the scheduler from the drivers.
>>
>> Signed-off-by: Matthew Brost
>
> Reviewed-by: Christian König for this
On 2023-09-14 13:48, Matthew Brost wrote:
> On Wed, Sep 13, 2023 at 10:56:10PM -0400, Luben Tuikov wrote:
>> On 2023-09-11 22:16, Matthew Brost wrote:
>>> If the TDR is set to a value, it can fire before a job is submitted in
>>> drm_sched_main. The job should be always
On 2023-09-14 00:18, Luben Tuikov wrote:
> On 2023-09-11 22:16, Matthew Brost wrote:
>> Rather than a global modparam for scheduling policy, move the scheduling
>> policy to scheduler / entity so user can control each scheduler / entity
>> policy.
>>
>>
On 2023-09-11 22:16, Matthew Brost wrote:
> Rather than a global modparam for scheduling policy, move the scheduling
> policy to scheduler / entity so user can control each scheduler / entity
> policy.
>
> v2:
> - s/DRM_SCHED_POLICY_MAX/DRM_SCHED_POLICY_COUNT (Luben)
> - Only include policy
On 2023-09-12 11:02, Matthew Brost wrote:
> On Tue, Sep 12, 2023 at 09:29:53AM +0200, Boris Brezillon wrote:
>> On Mon, 11 Sep 2023 19:16:04 -0700
>> Matthew Brost wrote:
>>
>>> @@ -1071,6 +1063,7 @@ static int drm_sched_main(void *param)
>>> *
>>> * @sched: scheduler instance
>>> * @ops:
On 2023-09-11 22:16, Matthew Brost wrote:
> In XE, the new Intel GPU driver, a choice has made to have a 1 to 1
has --> was
> mapping between a drm_gpu_scheduler and drm_sched_entity. At first this
> seems a bit odd but let us explain the reasoning below.
It's totally fine! :-)
>
> 1. In XE
On 2023-09-11 22:16, Matthew Brost wrote:
> If the TDR is set to a value, it can fire before a job is submitted in
> drm_sched_main. The job should always be submitted before the TDR
> fires, fix this ordering.
>
> v2:
> - Add to pending list before run_job, start TDR after (Luben, Boris)
>
On 2023-09-11 22:16, Matthew Brost wrote:
> Add helper to set TDR timeout and restart the TDR with new timeout
> value. This will be used in XE, new Intel GPU driver, to trigger the TDR
> to cleanup drm_sched_entity that encounter errors.
Do you just want to trigger the cleanup or do you really
On 2023-09-11 22:16, Matthew Brost wrote:
> Provide documentation to guide in ways to teardown an entity.
>
> Signed-off-by: Matthew Brost
> ---
> Documentation/gpu/drm-mm.rst | 6 ++
> drivers/gpu/drm/scheduler/sched_entity.c | 19 +++
> 2 files changed, 25
On 2023-09-11 22:16, Matthew Brost wrote:
> As a prerequisite to merging the new Intel Xe DRM driver [1] [2], we
> have been asked to merge our common DRM scheduler patches first.
>
> This is a continuation of an RFC [3] with all comments addressed, ready for
> a full review, and hopefully in state
On 2023-07-30 21:09, Matthew Brost wrote:
> On Thu, May 04, 2023 at 01:28:12AM -0400, Luben Tuikov wrote:
>> On 2023-04-03 20:22, Matthew Brost wrote:
>>> Add helper to set TDR timeout and restart the TDR with new timeout
>>> value. This will be used in XE, new Intel GPU
On 2023-07-31 03:26, Boris Brezillon wrote:
> +the PVR devs
>
> On Mon, 31 Jul 2023 01:00:59 +
> Matthew Brost wrote:
>
>> On Thu, May 04, 2023 at 01:23:05AM -0400, Luben Tuikov wrote:
>>> On 2023-04-03 20:22, Matthew Brost wrote:
>>>> If the
On 2023-08-02 00:06, Matthew Brost wrote:
> On Mon, Jul 17, 2023 at 01:40:38PM -0400, Luben Tuikov wrote:
>> On 2023-07-16 03:51, Asahi Lina wrote:
>>> On 15/07/2023 16.14, Luben Tuikov wrote:
>>>> On 2023-07-14 04:21, Asahi Lina wrote:
>>>>> drm_s
On 2023-07-19 14:16, Konstantin Ryabitsev wrote:
> July 18, 2023 at 1:14 AM, "Luben Tuikov" wrote:
>>>> Not sure about other drivers--they can speak for themselves and the CC list
>>>> should include them--please use "dim add-missing-cc" and
On 2023-07-19 04:45, Christian König wrote:
> On 16.07.23 at 09:51, Asahi Lina wrote:
>> On 15/07/2023 16.14, Luben Tuikov wrote:
>>> On 2023-07-14 04:21, Asahi Lina wrote:
>>>> drm_sched_fini() currently leaves any pending jobs dangling, which
>>>> c
On 2023-07-17 22:35, Asahi Lina wrote:
> On 18/07/2023 00.55, Christian König wrote:
>> On 15.07.23 at 16:14, aly...@rosenzweig.io wrote:
>>> 15 July 2023 at 00:03, "Luben Tuikov" wrote:
>>>> On 2023-07-14 05:57, Christian König wrote:
On 2023-07-17 18:45, Asahi Lina wrote:
> On 18/07/2023 02.40, Luben Tuikov wrote:
>> On 2023-07-16 03:51, Asahi Lina wrote:
>>> On 15/07/2023 16.14, Luben Tuikov wrote:
>>>> On 2023-07-14 04:21, Asahi Lina wrote:
>>>>> drm_sched_fini() currently leaves
On 2023-07-16 03:51, Asahi Lina wrote:
> On 15/07/2023 16.14, Luben Tuikov wrote:
>> On 2023-07-14 04:21, Asahi Lina wrote:
>>> drm_sched_fini() currently leaves any pending jobs dangling, which
>>> causes segfaults and other badness when job completion fences are
On 2023-07-14 04:21, Asahi Lina wrote:
> drm_sched_fini() currently leaves any pending jobs dangling, which
> causes segfaults and other badness when job completion fences are
> signaled after the scheduler is torn down.
If there are pending jobs, ideally we want to call into the driver,
so that
On 2023-07-14 05:57, Christian König wrote:
> On 14.07.23 at 11:49, Asahi Lina wrote:
>> On 14/07/2023 17.43, Christian König wrote:
>>> On 14.07.23 at 10:21, Asahi Lina wrote:
A signaled scheduler fence can outlive its scheduler, since fences are
independently reference counted.
On 2023-07-12 09:53, Christian König wrote:
> On 12.07.23 at 15:38, Uwe Kleine-König wrote:
>> Hello Maxime,
>>
>> On Wed, Jul 12, 2023 at 02:52:38PM +0200, Maxime Ripard wrote:
>>> On Wed, Jul 12, 2023 at 01:02:53PM +0200, Uwe Kleine-König wrote:
> Background is that this makes merge
> --- a/drivers/gpu/drm/scheduler/sched_fence.c
> +++ b/drivers/gpu/drm/scheduler/sched_fence.c
> @@ -35,7 +35,7 @@ static int __init drm_sched_fence_slab_init(void)
> {
> sched_fence_slab = kmem_cache_create(
> "drm_sched_fence", sizeof(struct drm_sched_fence), 0,
> - SLAB_HWCACHE_ALIGN, NULL);
> + SLAB_HWCACHE_ALIGN | SLAB_TYPESAFE_BY_RCU, NULL);
> if (!sched_fence_slab)
> return -ENOMEM;
>
Reviewed-by: Luben Tuikov
But let it simmer for 24 hours so Christian can see it too (CC-ed).
--
Regards,
Luben
timestamp of the last signaled one.
> + */
> if (count == 0)
> - return dma_fence_get_stub();
> + return dma_fence_allocate_private_stub(timestamp);
>
Hi Christian,
Thank you for clarifying the justification of this patch in the patch
description
On 2023-06-23 05:08, Christian König wrote:
> Some Android CTS is testing for that.
>
It's not entirely clear what "that" is, other than by the subject title
of the patch. Something like "Record and return the signalling time of
merged fences, as well as regular fences, since some Android CTS(?)
On 2023-06-23 04:03, Boris Brezillon wrote:
> On Fri, 23 Jun 2023 09:52:04 +0200
> Boris Brezillon wrote:
>
>> Drivers that can delegate waits to the firmware/GPU pass the scheduled
>> fence to drm_sched_job_add_dependency(), and issue wait commands to
>> the firmware/GPU at job submission time.
On 2023-06-22 05:56, Boris Brezillon wrote:
> On Wed, 21 Jun 2023 11:03:48 -0400
> Luben Tuikov wrote:
>
>> On 2023-06-21 10:53, Boris Brezillon wrote:
>>> On Wed, 21 Jun 2023 10:41:22 -0400
>>> Luben Tuikov wrote:
>>>
>>>> On 2023-0
On 2023-06-21 10:53, Boris Brezillon wrote:
> On Wed, 21 Jun 2023 10:41:22 -0400
> Luben Tuikov wrote:
>
>> On 2023-06-21 10:18, Boris Brezillon wrote:
>>> Hello Luben,
>>>
>>> On Wed, 21 Jun 2023 09:56:40 -0400
>>> Luben Tuikov wrote:
>
On 2023-06-21 10:18, Boris Brezillon wrote:
> Hello Luben,
>
> On Wed, 21 Jun 2023 09:56:40 -0400
> Luben Tuikov wrote:
>
>> On 2023-06-19 03:19, Boris Brezillon wrote:
>>> drm_sched_entity_kill_jobs_cb() logic is omitting the last fence popped
>>> from th
> Signed-off-by: Boris Brezillon
> Suggested-by: "Christian König"
> Reviewed-by: "Christian König"
These three lines would usually come after the CCs.
Regards,
Luben
> Cc: Frank Binns
> Cc: Sarah Walker
> Cc: Donald Robson
On 2023-06-15 07:56, Christian König wrote:
> Instead of implementing this ourself.
Spellcheck: "ourselves".
Acked-by: Luben Tuikov
Regards,
Luben
>
> Signed-off-by: Christian König
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c | 52 -
On 2023-06-15 07:56, Christian König wrote:
> Multiple drivers came up with the requirement to measure how
> much time each submission spend on the hw.
"spends"
>
> A previous attempt of accounting this had to be reverted because
> hw submissions can live longer than the entity originally
>
Therefore just mostly revert the changes to amdgpu.
>
> Signed-off-by: Christian König
Add a fixes-tag,
Fixes: 8ee3a52e3f35e0 ("drm/gpu-sched: fix force APP kill hang(v4)")
Acked-by: Luben Tuikov
Regards,
Luben
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c | 59 ++-
Thanks!
Reviewed-by: Luben Tuikov
Applied to drm-misc-fixes.
Regards,
Luben
On 2023-05-17 08:52, Vladislav Efanov wrote:
> The rq pointer points inside the drm_gpu_scheduler structure. Thus
> it can't be NULL.
>
> Found by Linux Verification Center (linuxtesting.org) with SVACE
On 2023-05-17 19:35, Luben Tuikov wrote:
> Rename drm_sched_wakeup() to drm_sched_wakeup_if_canqueue() since the former
> is misleading, as it wakes up the GPU scheduler _only if_ more jobs can be
> queued to the underlying hardware.
>
> This distinction is important to make,
Christian König
Cc: Alex Deucher
Signed-off-by: Luben Tuikov
---
drivers/gpu/drm/scheduler/sched_entity.c | 4 ++--
drivers/gpu/drm/scheduler/sched_main.c | 6 +++---
include/drm/gpu_scheduler.h | 2 +-
3 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/drive
Rename drm_sched_ready() to drm_sched_can_queue(). "ready" can mean many
things and is thus meaningless in this context. Instead, rename to a name
which precisely conveys what is being checked.
Cc: Christian König
Cc: Alex Deucher
Signed-off-by: Luben Tuikov
Reviewed-by: Al
Ignore this series--I'll repost without the duplication.
Regards,
Luben
On 2023-05-17 19:08, Luben Tuikov wrote:
> Rename drm_sched_wakeup() to drm_sched_wakeup_if_canqueue() since the former
> is misleading, as it wakes up the GPU scheduler _only if_ more jobs can be
> queued to the u
other conditions are also true, e.g. when there
are jobs to be cleaned. For instance, a user might want to wake up the
scheduler only because there are more jobs to clean, but whether we can queue
more jobs is irrelevant.
Cc: Christian König
Cc: Alex Deucher
Signed-off-by: Luben Tuikov
Christian König
Cc: Alex Deucher
Signed-off-by: Luben Tuikov
---
drivers/gpu/drm/scheduler/sched_entity.c | 4 ++--
drivers/gpu/drm/scheduler/sched_main.c | 6 +++---
include/drm/gpu_scheduler.h | 2 +-
3 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/drive
Rename drm_sched_ready() to drm_sched_can_queue(). "ready" can mean many
things and is thus meaningless in this context. Instead, rename to a name
which precisely conveys what is being checked.
Cc: Christian König
Cc: Alex Deucher
Signed-off-by: Luben Tuikov
Reviewed-by: Al
On 2023-05-17 16:43, Alex Deucher wrote:
> On Wed, May 17, 2023 at 3:04 PM Luben Tuikov wrote:
>>
>> Rename drm_sched_wakeup() to drm_sched_wakeup_if_canqueue() since the former
>
> I think drm_sched_wakeup_if_can_queue() looks cleaner.
Yeah, I can change it to this--I was
other conditions are also true, e.g. when there
are jobs to be cleaned. For instance, a user might want to wake up the
scheduler only because there are more jobs to clean, but whether we can queue
more jobs is irrelevant.
Cc: Christian König
Cc: Alex Deucher
Signed-off-by: Luben Tuikov
Rename drm_sched_ready() to drm_sched_can_queue(). "ready" can mean many
things and is thus meaningless in this context. Instead, rename to a name
which precisely conveys what is being checked.
Cc: Christian König
Cc: Alex Deucher
Signed-off-by: Luben Tuikov
---
drivers/gpu/drm
On 2023-05-10 10:24, vitaly prosyak wrote:
>
> On 2023-05-10 10:19, Luben Tuikov wrote:
>> On 2023-05-10 09:51, vitaly.pros...@amd.com wrote:
>>> From: Vitaly Prosyak
>>>
>>> During an IGT GPU reset test we see again oops despite of
>>> commit
occur.
>
> Use the field timeout_wq to prevent oops for uninitialized schedulers.
> The field could be initialized by the work queue of resetting the domain.
>
> Fixes: 0c8c901aaaebc9 ("drm/sched: Check scheduler ready before calling
> timeout handling")
>
> v1: Corr
On 2023-05-09 17:43, vitaly.pros...@amd.com wrote:
> From: Vitaly Prosyak
>
> During an IGT GPU reset test we see again oops despite of
> commit 0c8c901aaaebc9bf8bf189ffc116e678f7a2dc16
> drm/sched: Check scheduler ready before calling timeout handling.
You can probably use the more succinct
On 2023-04-03 20:22, Matthew Brost wrote:
> Add generic schedule message interface which sends messages to backend
> from the drm_gpu_scheduler main submission thread. The idea is some of
> these messages modify some state in drm_sched_entity which is also
> modified during submission. By
On 2023-04-03 20:22, Matthew Brost wrote:
> Add helper to set TDR timeout and restart the TDR with new timeout
> value. This will be used in XE, new Intel GPU driver, to trigger the TDR
> to cleanup drm_sched_entity that encounter errors.
>
> Signed-off-by: Matthew Brost
> ---
>
On 2023-04-03 20:22, Matthew Brost wrote:
> If the TDR is set to a value, it can fire before a job is submitted in
> drm_sched_main. The job should always be submitted before the TDR
> fires, fix this ordering.
>
> Signed-off-by: Matthew Brost
> ---
> drivers/gpu/drm/scheduler/sched_main.c |
Hi Christian,
Patch is,
Reviewed-by: Luben Tuikov
Regards,
Luben
On 2023-04-27 08:27, Christian König wrote:
> When no hw fence is provided for a job, that means the job didn't
> execute.
>
> Signed-off-by: Christian König
> ---
> drivers/gpu/drm/scheduler/sched_
On 2023-04-19 06:07, Lucas Stach wrote:
> On Wednesday, 19.04.2023 at 10:53 +0100, Steven Price wrote:
>> On 19/04/2023 10:44, Lucas Stach wrote:
>>> Hi Steven,
>>>
>>> On Wednesday, 19.04.2023 at 10:39 +0100, Steven Price wrote:
On 18/04/2023 11:04, Danilo Krummrich wrote:
> It
Reviewed-by: Luben Tuikov
and applied to drm-misc-next.
Thanks!
Regards,
Luben
On 2023-04-18 06:04, Danilo Krummrich wrote:
> It already happened a few times that patches slipped through which
> implemented access to an entity through a job that was already removed
> from the entit
On 2023-04-11 14:13, Danilo Krummrich wrote:
> On 4/5/23 19:39, Luben Tuikov wrote:
>> On 2023-03-31 01:59, Christian König wrote:
>>> Am 31.03.23 um 02:06 schrieb Danilo Krummrich:
>>>> It already happened a few times that patches slipped through which
>>>&g
On 2023-04-06 06:45, Lucas Stach wrote:
> On Thursday, 06.04.2023 at 10:27 +0200, Daniel Vetter wrote:
>> On Thu, 6 Apr 2023 at 10:22, Christian König
>> wrote:
>>>
>>> On 05.04.23 at 18:09, Luben Tuikov wrote:
>>>> On 2023-04-05 10:05,