On Fri, 2025-04-25 at 11:20 +0100, Tvrtko Ursulin wrote:
> There is no need to keep entities with no jobs in the tree, so let's
> remove them once the last job is consumed. This keeps the tree smaller,
> which is nicer and more efficient as entities are removed and re-added
> on every popped job.
That there is no need to do so doesn't imply that you can't keep them
around. The commit message doesn't make the motivation clear.

> 
> Signed-off-by: Tvrtko Ursulin <tvrtko.ursu...@igalia.com>
> Cc: Christian König <christian.koe...@amd.com>
> Cc: Danilo Krummrich <d...@kernel.org>
> Cc: Matthew Brost <matthew.br...@intel.com>
> Cc: Philipp Stanner <pha...@kernel.org>
> ---
>  drivers/gpu/drm/scheduler/sched_rq.c | 24 +++++++++++++-----------
>  1 file changed, 13 insertions(+), 11 deletions(-)

Since this doesn't simplify the code base, I think the only
justification would be a somewhat decent performance gain. Does this
patch result in that? Otherwise it's probably better to keep git-blame
intact here.

P.

> 
> diff --git a/drivers/gpu/drm/scheduler/sched_rq.c b/drivers/gpu/drm/scheduler/sched_rq.c
> index d477a027feb9..2cde89cf25fb 100644
> --- a/drivers/gpu/drm/scheduler/sched_rq.c
> +++ b/drivers/gpu/drm/scheduler/sched_rq.c
> @@ -149,25 +149,27 @@ void drm_sched_rq_pop_entity(struct drm_sched_entity *entity)
>  {
>  	struct drm_sched_job *next_job;
>  	struct drm_sched_rq *rq;
> -	ktime_t ts;
> 
>  	/*
>  	 * Update the entity's location in the min heap according to
>  	 * the timestamp of the next job, if any.
>  	 */
> +	spin_lock(&entity->lock);
> +	rq = entity->rq;
> +	spin_lock(&rq->lock);
>  	next_job = drm_sched_entity_queue_peek(entity);
> -	if (!next_job)
> -		return;
> +	if (next_job) {
> +		ktime_t ts;
> 
> -	if (drm_sched_policy == DRM_SCHED_POLICY_FIFO)
> -		ts = next_job->submit_ts;
> -	else
> -		ts = drm_sched_rq_get_rr_deadline(rq);
> +		if (drm_sched_policy == DRM_SCHED_POLICY_FIFO)
> +			ts = next_job->submit_ts;
> +		else
> +			ts = drm_sched_rq_get_rr_deadline(rq);
> 
> -	spin_lock(&entity->lock);
> -	rq = entity->rq;
> -	spin_lock(&rq->lock);
> -	drm_sched_rq_update_fifo_locked(entity, rq, ts);
> +		drm_sched_rq_update_fifo_locked(entity, rq, ts);
> +	} else {
> +		drm_sched_rq_remove_fifo_locked(entity, rq);
> +	}
>  	spin_unlock(&rq->lock);
>  	spin_unlock(&entity->lock);
>  }