On Mon, 02 Apr, at 09:49:54AM, Davidlohr Bueso wrote:
> 
> We can get rid of it by the "traditional" means of adding an
> update_rq_clock() call after acquiring the rq->lock in
> do_sched_rt_period_timer().
> 
> The rt task throttling case (which this workload also hits) can be
> ignored: the skip_update call there is actually bogus, and quite the
> contrary, the request bits are removed/reverted. By setting RQCF_UPDATED
> we no longer care whether the skip is happening or not, and therefore
> make the assert_clock_updated() check happy.
> 
> Signed-off-by: Davidlohr Bueso <dbu...@suse.de>
> ---
>  kernel/sched/rt.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
> index 86b77987435e..ad13e6242481 100644
> --- a/kernel/sched/rt.c
> +++ b/kernel/sched/rt.c
> @@ -839,6 +839,8 @@ static int do_sched_rt_period_timer(struct rt_bandwidth *rt_b, int overrun)
>                       continue;
>  
>               raw_spin_lock(&rq->lock);
> +             update_rq_clock(rq);
> +
>               if (rt_rq->rt_time) {
>                       u64 runtime;

Looks good to me.
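
For anyone unfamiliar with the RQCF machinery: the assert being satisfied
here lives in kernel/sched/sched.h. Roughly, paraphrased from the kernel
around this time (under CONFIG_SCHED_DEBUG; details may differ between
versions):

	/* rq::clock_update_flags bits */
	#define RQCF_REQ_SKIP	0x01	/* skip requested for the next update */
	#define RQCF_ACT_SKIP	0x02	/* skip active for the current update */
	#define RQCF_UPDATED	0x04	/* clock updated under this rq->lock */

	static inline void assert_clock_updated(struct rq *rq)
	{
		/*
		 * Warn if rq->clock is read with neither RQCF_ACT_SKIP nor
		 * RQCF_UPDATED set since rq->lock was taken, i.e. flags are
		 * 0x00 or only RQCF_REQ_SKIP.
		 */
		SCHED_WARN_ON(rq->clock_update_flags < RQCF_ACT_SKIP);
	}

update_rq_clock() sets RQCF_UPDATED while holding rq->lock, so calling it
right after raw_spin_lock(&rq->lock) as above is enough to keep any later
clock reads in the critical section from tripping the warning.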

Reviewed-by: Matt Fleming <m...@codeblueprint.co.uk>
