Signed-off-by: Morten Rasmussen
Signed-off-by: Steve Muckle
---
kernel/sched/fair.c | 11 +++
1 file changed, 11 insertions(+)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1093873..95b83c4 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4737,6 +4737,17 @@
Signed-off-by: Vincent Guittot
Signed-off-by: Steve Muckle
---
kernel/sched/rt.c | 49 -
1 file changed, 48 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 8ec86ab..9694204 100644
--- a/kernel/sched/rt.c
+++ b/k
flag in sched.h that is set only at
fork() time and is then consumed in enqueue_task_fair() for our purposes.
cc: Ingo Molnar
cc: Peter Zijlstra
Signed-off-by: Juri Lelli
Signed-off-by: Steve Muckle
---
kernel/sched/core.c | 2 +-
kernel/sched/fair.c | 9 +++--
kernel/sched/sched.h | 1 +
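The mechanism described above (a per-task flag set only at fork() time and consumed on the task's first enqueue in the fair class) can be sketched as follows. All field and function names here are illustrative stand-ins, not the actual kernel identifiers from the patch:

```c
#include <stdbool.h>

/* Minimal model of the described mechanism: a one-shot flag set at
 * fork() and consumed on first enqueue, letting the fair class tell a
 * brand-new task apart from an ordinary wakeup. Names are assumptions. */
struct task {
	bool just_forked;	/* set in fork, cleared on first enqueue */
	int boosted;		/* policy applied once for new tasks */
};

static void sched_fork(struct task *p)
{
	p->just_forked = true;
}

static void enqueue_task_fair(struct task *p)
{
	if (p->just_forked) {
		p->just_forked = false;	/* consume the one-shot flag */
		p->boosted = 1;		/* e.g. request extra capacity headroom */
	}
}
```

Since the flag is cleared on first consumption, subsequent enqueues of the same task take the ordinary path.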
is also done to minimize the impact on a task's performance.
Original fair-class-only version authored by Juri Lelli.
cc: Ingo Molnar
cc: Peter Zijlstra
Signed-off-by: Juri Lelli
Signed-off-by: Steve Muckle
---
kernel/sched/core.c | 41
kernel/sched
already fulfilled by __update_cpu_load, so the call in sched_rt_avg_update,
which is part of the hot path, is useless.
Signed-off-by: Vincent Guittot
Signed-off-by: Steve Muckle
---
kernel/sched/sched.h | 1 -
1 file changed, 1 deletion(-)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 90
Lelli
Signed-off-by: Steve Muckle
---
kernel/sched/fair.c | 21 -
1 file changed, 20 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1bfbbb7..880ceee 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6023,6 +6023,10 @@ static
st on fork()
sched/fair: cpufreq_sched triggers for load balancing
Michael Turquette (2):
cpufreq: introduce cpufreq_driver_is_slow
sched: scheduler-driven cpu frequency selection
Morten Rasmussen (1):
sched: Compute cpu capacity available at current frequency
Steve Muckle (1):
sched/f
, revised commit text]
CC: Ricky Liang
Signed-off-by: Michael Turquette
Signed-off-by: Juri Lelli
Signed-off-by: Steve Muckle
---
drivers/cpufreq/Kconfig | 20 +++
include/linux/cpufreq.h | 3 +
include/linux/sched.h | 8 +
kernel/sched/Makefile | 1 +
kernel
On 11/02/2015 06:02 AM, Peter Zijlstra wrote:
> On Wed, Oct 28, 2015 at 08:30:42AM +0530, Viresh Kumar wrote:
>> These are the last memories I have around upstreaming this governor:
>> http://marc.info/?l=linux-kernel&m=132867057910479&w=2
>>
>> Has anything changed after that? Or we decided to go
On 08/17/2015 05:19 AM, Juri Lelli wrote:
>> Nah, just maybe: (capacity << SCHED_CAPACITY_SHIFT) / capacity_orig_of()
>> such that you don't have to export that knowledge to this thing.
>>
> Oh, right. I guess we can just go with something like:
>
> req_cap = get_cpu_usage(cpu) * capacity_ma
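The scaling being discussed above (usage multiplied by a margin and normalized against the CPU's maximum capacity, so the request is expressed in frequency-invariant SCHED_CAPACITY_SCALE units) can be sketched like this. The margin value of 1280 (roughly 25% headroom) and the helper name are assumptions for illustration, not the patch's final code:

```c
/* Illustrative sketch of the capacity-request scaling: scale current
 * usage by a margin, then normalize by the CPU's original (max-frequency)
 * capacity. Values and names are assumed, not the kernel's actual code. */
#define SCHED_CAPACITY_SHIFT	10
#define SCHED_CAPACITY_SCALE	(1UL << SCHED_CAPACITY_SHIFT)	/* 1024 */

static unsigned long capacity_margin = 1280;	/* ~25% headroom, assumed */

static unsigned long req_capacity(unsigned long usage,
				  unsigned long capacity_orig)
{
	/* usage in [0..capacity_orig] becomes a request in scale units,
	 * with the margin providing headroom above current usage */
	return usage * capacity_margin / capacity_orig;
}
```

For example, a CPU at half usage (512 of 1024) yields a request of 640, i.e. 62.5% of full scale rather than 50%, because of the margin.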
On 10/09/2015 02:14 AM, Juri Lelli wrote:
>> Though I understand the initial stated motivation here (avoiding a
>> redundant capacity request upon idle entry), releasing the CPU's
>> capacity request altogether on idle seems like it could be a contentious
>> policy decision.
>>
>> An exa
Hi Juri,
On 07/07/2015 11:24 AM, Morten Rasmussen wrote:
> From: Juri Lelli
>
> When a CPU is going idle it is pointless to ask for an OPP update as we
> would wake up another task only to ask for the same capacity we are already
> running at (utilization gets moved to blocked_utilization). We
On 08/25/2015 03:45 AM, Juri Lelli wrote:
> But, it is true that if the above events happened the other way around
> (we trigger an update after load balancing and a new task arrives), we
> may miss the opportunity to jump to max with the new task. In my mind
> this is probably not a big deal, as w
On 08/14/2015 06:02 AM, Morten Rasmussen wrote:
> To be sure not to break smp_nice, we have defined over-utilization as
> when:
>
> cpu_rq(any)::cfs::avg::util_avg + margin > cpu_rq(any)::capacity
>
> is true for any cpu in the system. IOW, as soon as one cpu is (nearly)
> 100% utilized, we switc
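The over-utilization rule quoted above (the system is over-utilized as soon as any single CPU's utilization plus a margin exceeds that CPU's capacity) can be sketched with a simple array-based model. The function name, array layout, and margin value are illustrative assumptions, not the kernel's actual implementation:

```c
#include <stdbool.h>

/* Sketch of the quoted rule: one (nearly) 100% utilized CPU flips the
 * whole system into the over-utilized state, so smp_nice is preserved
 * by falling back to regular load balancing. Names are assumptions. */
static bool system_overutilized(const unsigned long *util_avg,
				const unsigned long *capacity,
				int nr_cpus, unsigned long margin)
{
	for (int cpu = 0; cpu < nr_cpus; cpu++) {
		if (util_avg[cpu] + margin > capacity[cpu])
			return true;	/* any one CPU is enough */
	}
	return false;
}
```

Note the asymmetry: the check is per-CPU, not on the system-wide sum, so a single saturated CPU triggers the switch even when the others are idle.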
Hi Punit,
On 09/28/2015 09:48 AM, Punit Agrawal wrote:
> Hi Mike,
>
> I ran into an issue when using this patch. Posting it here as this is
> the latest posting I can find.
>
> Morten Rasmussen writes:
>
>> From: Michael Turquette
>>
...
>> diff --git a/include/linux/cpufreq.h b/include/linux
On 09/20/2015 03:03 PM, Leo Yan wrote:
> In this case of CPU is running at fmax, it's true that
> task_fits_capacity() will return true. But here i think
> cpu_overutilized() also will return true, so that means scheduler will
> go back to use CFS's old way for loading balance. Finally tasks also
>
On 09/18/2015 03:34 AM, Dietmar Eggemann wrote:
>> Here should consider scenario for two groups have same capacity?
>> This will benefit for the case LITTLE.LITTLE. So the code will be
>> looks like below:
>>
>> int target_sg_cpu = INT_MAX;
>>
>> if (capacity_of(max_cap_cpu) <= target_max
On 09/15/2015 08:19 AM, Peter Zijlstra wrote:
> Please flip the argument around; providing lots of knobs for vendors to
> do $magic with is _NOT_ a good thing.
>
> The whole out-of-tree cpufreq governor hack fest Android thing is a
> complete and utter fail on all levels. It's the embedded, ship, f
On 09/15/2015 08:00 AM, Patrick Bellasi wrote:
>> Agreed, though I also think those tunable values might also change for a
>> given set of tasks in different circumstances.
>
> Could you provide an example?
>
> In my view the per-task support should be exploited just for quite
> specialized tasks,
Hi Patrick,
On 09/11/2015 04:09 AM, Patrick Bellasi wrote:
>> It's also worth noting that mobile vendors typically add all sorts of
>> hacks on top of the existing cpufreq governors which further complicate
>> policy.
>
> Could it be that many of the hacks introduced by vendors are just
> there t
Hi Patrick,
On 09/03/2015 02:18 AM, Patrick Bellasi wrote:
> In my view, one of the main goals of sched-DVFS is actually that to be
> a solid and generic replacement of different CPUFreq governors.
> Being driven by the scheduler, sched-DVFS can exploit information on
> CPU demand of active tasks
Hi Morten, Dietmar,
On 08/14/2015 09:23 AM, Morten Rasmussen wrote:
...
> + * cfs_rq.avg.util_avg is the sum of running time of runnable tasks plus the
> + * recent utilization of currently non-runnable tasks on a CPU. It represents
> + * the amount of utilization of a CPU in the range [0..capacit