On Thu, Nov 27, 2025 at 6:47 AM Boris Brezillon
<[email protected]> wrote:
>
> On Wed, 26 Nov 2025 14:57:32 -0800
> Chia-I Wu <[email protected]> wrote:
>
> > >  static void queue_stop(struct panthor_queue *queue,
> > > @@ -3202,6 +3215,18 @@ queue_run_job(struct drm_sched_job *sched_job)
> > >
> > >                 group_schedule_locked(group, BIT(job->queue_idx));
> > >         } else {
> > > +               u32 queue_mask = BIT(job->queue_idx);
> > > +               bool resume_tick = group_is_idle(group) &&
> > > +                                  (group->idle_queues & queue_mask) &&
> > > +                                  !(group->blocked_queues & queue_mask) &&
> > > +                                  sched->resched_target == U64_MAX;
> > The logic here should be the same as the first part of
> > group_schedule_locked. I wonder if we can refactor that as well.
>
> I addressed everything you pointed out, except for this. The tests in
> group_schedule_locked() are too intertwined with the rest of the logic to
> be easily extracted into some helper. I'm happy to review such a patch
> though.
Sounds good.
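For reference, a standalone sketch of the resume_tick condition being discussed, with hypothetical stand-in types for the panthor structures (the field names queue_mask, idle_queues, blocked_queues, and resched_target come from the hunk above; everything else, including the simplified group_is_idle(), is assumed for illustration):

```c
#include <stdbool.h>
#include <stdint.h>

#define BIT(n) (1u << (n))

/* Minimal hypothetical stand-ins for the panthor structures. */
struct fake_group {
	uint32_t idle_queues;
	uint32_t blocked_queues;
};

struct fake_sched {
	uint64_t resched_target;
};

/* Simplified stand-in: the real group_is_idle() is more involved. */
static bool group_is_idle(const struct fake_group *group)
{
	return group->idle_queues != 0;
}

/*
 * Mirrors the condition from the hunk: resume the scheduler tick only
 * if the group was idle, the target queue was idle and not blocked,
 * and no resched is already scheduled (resched_target == U64_MAX).
 */
static bool should_resume_tick(const struct fake_sched *sched,
			       const struct fake_group *group,
			       uint32_t queue_idx)
{
	uint32_t queue_mask = BIT(queue_idx);

	return group_is_idle(group) &&
	       (group->idle_queues & queue_mask) &&
	       !(group->blocked_queues & queue_mask) &&
	       sched->resched_target == UINT64_MAX;
}
```

This is only a sketch of the predicate's shape, not the driver code; in the patch the result is latched into resume_tick before idle_queues is cleared, so the check sees the pre-submission state.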
>
> >
> > > +
> > > +               /* We just added something to the queue, so it's no longer idle. */
> > > +               group->idle_queues &= ~BIT(job->queue_idx);
> > group->idle_queues &= queue_mask;
Right, should have been "group->idle_queues &= ~queue_mask;".
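As a sanity check on that correction: `&= ~mask` clears only the bit in question, while the typo'd `&= mask` would instead clear every other bit. A minimal illustration (clear_idle is a hypothetical helper, not driver code):

```c
#include <stdint.h>

#define BIT(n) (1u << (n))

/* Clear one queue's idle bit, as the corrected line does. */
static uint32_t clear_idle(uint32_t idle_queues, uint32_t queue_idx)
{
	return idle_queues & ~BIT(queue_idx);
}
```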

> >
> > > +
> > > +               if (resume_tick)
> > > +                       sched_resume_tick(ptdev);
> > > +
> > >                 gpu_write(ptdev, CSF_DOORBELL(queue->doorbell_id), 1);
> > >                 if (!sched->pm.has_ref &&
> > >                     !(group->blocked_queues & BIT(job->queue_idx))) {
