On 10/10/2025 09:55, Philipp Stanner wrote:
On Wed, 2025-10-08 at 09:53 +0100, Tvrtko Ursulin wrote:
Helper operates on the run queue so let's make that the primary argument.

Signed-off-by: Tvrtko Ursulin <[email protected]>
Cc: Christian König <[email protected]>
Cc: Danilo Krummrich <[email protected]>
Cc: Matthew Brost <[email protected]>
Cc: Philipp Stanner <[email protected]>
---
  drivers/gpu/drm/scheduler/sched_main.c | 8 ++++----
  1 file changed, 4 insertions(+), 4 deletions(-)

That's a new patch from the RFC, isn't it?

And it's a general code improvement that is not related to CFS. I think
I mentioned it a few times already that a series is easier to review
and workflows are simplified if generic-improvement patches are
branched out and sent separately.

I thought you had agreed with that?

Hm, I am not sure. My workflow is definitely easier if this work is a single unit throughout.

Anyway, even with this change the code is still far from consistent, so how much of an improvement it brings is open to debate. The general idea is that functions in sched_rq.c operate on a struct drm_sched_rq, which is therefore the first argument, and by the end of the series the second argument disappears entirely:

void drm_sched_rq_init(struct drm_sched_rq *rq)
{
        spin_lock_init(&rq->lock);
        INIT_LIST_HEAD(&rq->entities);
        rq->rb_tree_root = RB_ROOT_CACHED;
        rq->head_prio = -1;
}

int drm_sched_init(struct drm_gpu_scheduler *sched, const struct drm_sched_init_args *args)
{
...
        drm_sched_rq_init(&sched->rq);

But again, even at that point the code base is still not fully consistent in this respect, i.e. it needs more work. Not least, you recently asked to rename drm_sched_rq_select_entity(rq) to drm_sched_select_entity(sched). So maybe you disagree with this patch completely and would prefer drm_sched_rq_init(sched). I don't know. Anyway, if you r-b it, it is trivial to send separately and merge. Or, if you disapprove, I will just drop this patch and rebase.

Regards,

Tvrtko

diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 46119aacb809..8b8c55b25762 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -174,13 +174,13 @@ void drm_sched_rq_update_fifo_locked(struct drm_sched_entity *entity,
  /**
   * drm_sched_rq_init - initialize a given run queue struct
   *
+ * @rq: scheduler run queue
   * @sched: scheduler instance to associate with this run queue
- * @rq: scheduler run queue
   *
   * Initializes a scheduler runqueue.
   */
-static void drm_sched_rq_init(struct drm_gpu_scheduler *sched,
-                             struct drm_sched_rq *rq)
+static void drm_sched_rq_init(struct drm_sched_rq *rq,
+                             struct drm_gpu_scheduler *sched)
  {
        spin_lock_init(&rq->lock);
        INIT_LIST_HEAD(&rq->entities);
@@ -1353,7 +1353,7 @@ int drm_sched_init(struct drm_gpu_scheduler *sched, const struct drm_sched_init_
                sched->sched_rq[i] = kzalloc(sizeof(*sched->sched_rq[i]), GFP_KERNEL);
                if (!sched->sched_rq[i])
                        goto Out_unroll;
-               drm_sched_rq_init(sched, sched->sched_rq[i]);
+               drm_sched_rq_init(sched->sched_rq[i], sched);
        }
  init_waitqueue_head(&sched->job_scheduled);

