Hi Juri,
On Tue, Nov 18, 2014 at 06:00:20PM +0000, Juri Lelli wrote:
>Hi,
>
>On 13/11/14 08:47, Wanpeng Li wrote:
>> The dl class refuses to set the bandwidth, via sched_rt_runtime_us 
>> and sched_rt_period_us, to a value smaller than the bandwidth 
>> currently allocated in any of the root_domains. However, in the 
>> !CONFIG_RT_GROUP_SCHED case the rt runtime is already set according 
>> to sched_rt_runtime_us before the dl class verifies that the new 
>> bandwidth is acceptable.
>> 
>> As a result, the rt runtime is left corrupted when dl refuses the 
>> new bandwidth, since there is no undo path to reset the rt runtime 
>> to its old value.
>> 
>
>Can't we just move sched_dl_global_constraints() before
>sched_rt_global_constraints(), and change the name of the
>former to sched_dl_global_validate()?

Good idea, I will send out the next version later. ;-)
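
Just to confirm I understand the suggestion, the write path of
sched_rt_handler() would then roughly look like the sketch below
(based on the current handler, with sched_dl_global_constraints()
renamed as you propose; exact error handling may differ in the
actual patch):

	if (!ret && write) {
		ret = sched_rt_global_validate();
		if (ret)
			goto undo;

		/* dl only checks the new bandwidth here, no state is touched */
		ret = sched_dl_global_validate();
		if (ret)
			goto undo;

		/* rt runtime is written only after dl has accepted the value */
		ret = sched_rt_global_constraints();
		if (ret)
			goto undo;

		sched_rt_do_global();
		sched_dl_do_global();
	}

That way sched_rt_global_constraints() can keep setting
rt_rq->rt_runtime as it does today, and nothing needs to be undone on
the rt side when dl refuses the new bandwidth.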

Regards,
Wanpeng Li

>
>Thanks,
>
>- Juri
>
>> This patch fixes it by setting the rt runtime only after all of the 
>> sanity checks have passed, in the !CONFIG_RT_GROUP_SCHED case.
>> 
>> Signed-off-by: Wanpeng Li <[email protected]>
>> ---
>>  kernel/sched/core.c | 30 ++++++++++++++++--------------
>>  1 file changed, 16 insertions(+), 14 deletions(-)
>> 
>> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
>> index 2e7578a..355dde3 100644
>> --- a/kernel/sched/core.c
>> +++ b/kernel/sched/core.c
>> @@ -7795,20 +7795,7 @@ static int sched_rt_can_attach(struct task_group *tg, struct task_struct *tsk)
>>  #else /* !CONFIG_RT_GROUP_SCHED */
>>  static int sched_rt_global_constraints(void)
>>  {
>> -    unsigned long flags;
>> -    int i, ret = 0;
>> -
>> -    raw_spin_lock_irqsave(&def_rt_bandwidth.rt_runtime_lock, flags);
>> -    for_each_possible_cpu(i) {
>> -            struct rt_rq *rt_rq = &cpu_rq(i)->rt;
>> -
>> -            raw_spin_lock(&rt_rq->rt_runtime_lock);
>> -            rt_rq->rt_runtime = global_rt_runtime();
>> -            raw_spin_unlock(&rt_rq->rt_runtime_lock);
>> -    }
>> -    raw_spin_unlock_irqrestore(&def_rt_bandwidth.rt_runtime_lock, flags);
>> -
>> -    return ret;
>> +    return 0;
>>  }
>>  #endif /* CONFIG_RT_GROUP_SCHED */
>>  
>> @@ -7890,6 +7877,21 @@ static int sched_rt_global_validate(void)
>>  
>>  static void sched_rt_do_global(void)
>>  {
>> +#ifndef CONFIG_RT_GROUP_SCHED
>> +    unsigned long flags;
>> +    int i;
>> +
>> +    raw_spin_lock_irqsave(&def_rt_bandwidth.rt_runtime_lock, flags);
>> +    for_each_possible_cpu(i) {
>> +            struct rt_rq *rt_rq = &cpu_rq(i)->rt;
>> +
>> +            raw_spin_lock(&rt_rq->rt_runtime_lock);
>> +            rt_rq->rt_runtime = global_rt_runtime();
>> +            raw_spin_unlock(&rt_rq->rt_runtime_lock);
>> +    }
>> +    raw_spin_unlock_irqrestore(&def_rt_bandwidth.rt_runtime_lock, flags);
>> +#endif
>> +
>>      def_rt_bandwidth.rt_runtime = global_rt_runtime();
>>      def_rt_bandwidth.rt_period = ns_to_ktime(global_rt_period());
>>  }
>> 