On Thu, May 17, 2018 at 05:23:12PM +0200, Juri Lelli wrote:
> On 17/05/18 07:43, Joel Fernandes wrote:
> > On Thu, May 17, 2018 at 04:28:23PM +0200, Juri Lelli wrote:
> > [...]
> > > > > > We would need more locking in the work handler in that case, and
> > > > > > I think there may be a chance of missing a request with that
> > > > > > solution if it arrives right as sugov_work() returns.
> > > > > 
> > > > > Mmm, true. Ideally we might want some sort of queue into which
> > > > > requests are atomically inserted, with the sugov kthread then
> > > > > consuming entries until the queue is empty.
> > > > 
> > > > IMO we don't really need a queue or anything; we just need the
> > > > kthread to process the *latest* request it sees, since that's the
> > > > only one that matters.
> > > 
> > > Yep, makes sense.
> > > 
> > > > > But I guess that's going to be too much complexity for a
> > > > > (hopefully) corner case.
> > > > 
> > > > I thought of this corner case too. I'd argue it's still an
> > > > improvement over not doing anything, but we could tighten this up
> > > > a bit more if you wanted by
> > > 
> > > Indeed! :)
> > > 
> > > > doing something like this on top of my patch. Thoughts?
> > > > 
> > > > ---8<-----------------------
> > > > 
> > > > diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
> > > > index a87fc281893d..e45ec24b810b 100644
> > > > --- a/kernel/sched/cpufreq_schedutil.c
> > > > +++ b/kernel/sched/cpufreq_schedutil.c
> > > > @@ -394,6 +394,7 @@ static void sugov_work(struct kthread_work *work)
> > > >         unsigned int freq;
> > > >         unsigned long flags;
> > > >  
> > > > +redo_work:
> > > >         /*
> > > >          * Hold sg_policy->update_lock briefly to handle the case where
> > > >          * sg_policy->next_freq is read here, and then updated by
> > > > @@ -409,6 +410,9 @@ static void sugov_work(struct kthread_work *work)
> > > >         __cpufreq_driver_target(sg_policy->policy, freq,
> > > >                                 CPUFREQ_RELATION_L);
> > > >         mutex_unlock(&sg_policy->work_lock);
> > > > +
> > > > +       if (sg_policy->work_in_progress)
> > > > +               goto redo_work;
> > > 
> > > Didn't we already queue up another irq_work at this point?
> > 
> > Oh yeah, the case I was thinking of was the kthread being active
> > while the new irq_work raced with it and finished.
> > 
> > Since that would just mean a new kthread_work for the worker, the
> > loop I mentioned above isn't needed. In fact, there's already a
> > higher-level loop taking care of it in kthread_worker_fn(), shown
> > below. The governor thread won't sleep and will keep servicing all
> > pending requests till they're done, so I think we're good with my
> > original patch.
> > 
> > repeat:
> > [...]
> >     if (!list_empty(&worker->work_list)) {
> >             work = list_first_entry(&worker->work_list,
> >                                     struct kthread_work, node);
> >             list_del_init(&work->node);
> >     }
> >     worker->current_work = work;
> >     spin_unlock_irq(&worker->lock);
> > 
> >     if (work) {
> >             __set_current_state(TASK_RUNNING);
> >             work->func(work);
> >     } else if (!freezing(current))
> >             schedule();
> > 
> >     try_to_freeze();
> >     cond_resched();
> >     goto repeat;
> 
> Ah, right. Your original patch LGTM then. :)

Cool, thanks. :)
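
For reference, with my original patch applied, sugov_work() ends up
looking roughly like this (a sketch, modulo the exact comment wording):

static void sugov_work(struct kthread_work *work)
{
        struct sugov_policy *sg_policy =
                container_of(work, struct sugov_policy, work);
        unsigned int freq;
        unsigned long flags;

        /*
         * Hold sg_policy->update_lock briefly so that next_freq is read
         * and work_in_progress is cleared atomically with respect to the
         * update path; otherwise a request landing between the read and
         * the clear could be missed.
         */
        raw_spin_lock_irqsave(&sg_policy->update_lock, flags);
        freq = sg_policy->next_freq;
        sg_policy->work_in_progress = false;
        raw_spin_unlock_irqrestore(&sg_policy->update_lock, flags);

        mutex_lock(&sg_policy->work_lock);
        __cpufreq_driver_target(sg_policy->policy, freq, CPUFREQ_RELATION_L);
        mutex_unlock(&sg_policy->work_lock);
}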

> Maybe add a comment about this higher-level loop?

Sure, will do.
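
Something along these lines, perhaps (just a first stab at the wording):

        /*
         * No need to loop here: if a request arrives while the driver
         * call is in flight, the update path queues fresh irq_work ->
         * kthread_work, and kthread_worker_fn()'s processing loop keeps
         * invoking work items until worker->work_list is empty, so the
         * latest next_freq is picked up before the kthread sleeps.
         */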

thanks,

- Joel
