On 22/03/2019 12.57, Suthikulpanit, Suravee wrote:
> Introduce a helper function for setting lapic parameters when
> activating/deactivating APICv.
>
> Signed-off-by: Suravee Suthikulpanit
> ---
> arch/x86/kvm/lapic.c | 23 ++-
> arch/x86/kvm/lapic.h | 1 +
> 2 files changed,
On 22/03/2019 12.57, Suthikulpanit, Suravee wrote:
> When activating/deactivating AVIC at runtime, all vCPUs have to be
> operating in the same mode. So, introduce a new interface to request
> all vCPUs to activate/deactivate APICv.
If we need to switch APICV on and off on all vCPUs of a VM,
On 22/03/2019 12.57, Suthikulpanit, Suravee wrote:
> Activating/deactivating AVIC requires setting/unsetting the memory region used
> for APIC_ACCESS_PAGE_PRIVATE_MEMSLOT. So, refactor avic_init_access_page()
> to avic_setup_access_page() and add srcu_read_lock/unlock, which are needed
> to allow
Hi Suravee.
I wonder how this interacts with Hyper-V SynIC; see comments below.
On 22/03/2019 12.57, Suthikulpanit, Suravee wrote:
> AMD AVIC does not support ExtINT. Therefore, AVIC must be temporarily
> deactivated, falling back to legacy interrupt injection via
> vINTR and interrupt
Am 14.02.19 um 22:46 schrieb Jan H. Schönherr:
Some systems experience regular interruptions (60 Hz SMI?) that prevent
the quick PIT calibration from succeeding: individual interruptions can be
so long that the PIT MSB is observed to decrement by 2 or 3 instead of 1.
The existing code cannot
Am 12.02.19 um 12:57 schrieb Thomas Gleixner:
On Tue, 29 Jan 2019, Thomas Gleixner wrote:
On Tue, 29 Jan 2019, Jan H. Schönherr wrote:
Am 29.01.2019 um 11:23 schrieb Jan H. Schönherr:
+calibrate:
+	/*
+	 * Extrapolate the error and fail fast if the error will
+	 * never
the very first reads.
Signed-off-by: Jan H. Schönherr
---
v2:
- Dropped the other hacky patch for the time being.
- Fixed the early exit check.
- Hopefully fixed all inaccurate math in v1.
- Extended comments.
arch/x86/kernel/tsc.c | 91 +++
1 file changed, 57
the very first reads.
Signed-off-by: Jan H. Schönherr
---
arch/x86/kernel/tsc.c | 80 +--
1 file changed, 46 insertions(+), 34 deletions(-)
diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
index e9f777bfed40..a005e0aa215e 100644
--- a/arch/x86/kernel
Fatal1ty X399 Professional Gaming, BIOS P3.30.
This unexplained behavior goes away as soon as the sibling CPU of the
boot CPU is brought back up. Hence, add a hack to restore the sibling
CPU before all others on unfreeze. This keeps the TSC stable.
Signed-off-by: Jan H. Schönherr
---
kernel/cpu.c
case) hasn't been resumed yet. (I did some experiments
with CPU hotplug before and after suspend, but apart from reproducing
the issue and verifying the "fix", I got nowhere.)
The patches are against v4.20.
Jan H. Schönherr (2):
x86/tsc: Allow quick PIT calibration despite interrupt
On 23/11/2018 17.51, Frederic Weisbecker wrote:
> On Tue, Sep 18, 2018 at 03:22:13PM +0200, Jan H. Schönherr wrote:
>> On 09/17/2018 11:48 AM, Peter Zijlstra wrote:
>>> Right, so the whole bandwidth thing becomes a pain; the simplest
>>> solution is to detect the
On 27/10/2018 01.05, Subhra Mazumdar wrote:
>
>
>> D) What can I *not* do with this?
>> -
>>
>> Besides the missing load-balancing within coscheduled task-groups, this
>> implementation has the following properties, which might be considered
>> short-comings.
>>
On 19/10/2018 02.26, Subhra Mazumdar wrote:
> Hi Jan,
Hi. Sorry for the delay.
> On 9/7/18 2:39 PM, Jan H. Schönherr wrote:
>> The collective context switch from one coscheduled set of tasks to another
>> -- while fast -- is not atomic. If a use-case needs the absolute gua
On 19/10/2018 17.45, Rik van Riel wrote:
> On Fri, 2018-10-19 at 17:33 +0200, Frederic Weisbecker wrote:
>> On Fri, Oct 19, 2018 at 11:16:49AM -0400, Rik van Riel wrote:
>>> On Fri, 2018-10-19 at 13:40 +0200, Jan H. Schönherr wrote:
>>>>
>>>> Now, it
On 17/10/2018 04.09, Frederic Weisbecker wrote:
> On Fri, Sep 07, 2018 at 11:39:47PM +0200, Jan H. Schönherr wrote:
>> C) How does it work?
>>
[...]
>> For each task-group, the user can select at which level it should be
>> scheduled. If yo
On 09/26/2018 11:05 PM, Nishanth Aravamudan wrote:
> On 26.09.2018 [10:25:19 -0700], Nishanth Aravamudan wrote:
>>
>> I found another issue today, while attempting to test (with 61/60
>> applied) separate coscheduling cgroups for vcpus and emulator threads
>> [the default configuration with
On 09/17/2018 02:25 PM, Peter Zijlstra wrote:
> On Fri, Sep 14, 2018 at 06:25:44PM +0200, Jan H. Schönherr wrote:
>
>> Assuming, there is a cgroup-less solution that can prevent simultaneous
>> execution of tasks on a core, when they're not supposed to. How would you
>> t
On 09/17/2018 03:37 PM, Peter Zijlstra wrote:
> On Fri, Sep 14, 2018 at 06:25:44PM +0200, Jan H. Schönherr wrote:
>> With gang scheduling as defined by Feitelson and Rudolph [6], you'd have to
>> explicitly schedule idle time. With coscheduling as defined by Ousterhout
>&
On 09/19/2018 11:53 PM, Subhra Mazumdar wrote:
> Can we have a more generic interface, like specifying a set of task ids
> to be co-scheduled with a particular level rather than tying this with
> cgroups? KVMs may not always run with cgroups and there might be other
> use cases where we might
On 09/18/2018 04:40 PM, Rik van Riel wrote:
> On Fri, 2018-09-14 at 18:25 +0200, Jan H. Schönherr wrote:
>> On 09/14/2018 01:12 PM, Peter Zijlstra wrote:
>>> On Fri, Sep 07, 2018 at 11:39:47PM +0200, Jan H. Schönherr wrote:
>>>>
>>>> B) Why would I wa
On 09/18/2018 04:35 PM, Rik van Riel wrote:
> On Tue, 2018-09-18 at 15:22 +0200, Jan H. Schönherr wrote:
[...]
> Task priorities in a flat runqueue are relatively straightforward, with
> vruntime scaling just like done for nice levels, but I have to admit
> that throttled gr
On 09/18/2018 03:38 PM, Peter Zijlstra wrote:
> On Tue, Sep 18, 2018 at 03:22:13PM +0200, Jan H. Schönherr wrote:
>> AFAIK, changing the affinity of a cpuset overwrites the individual
>> affinities of tasks
>> within them. Thus, it shouldn't be an issue.
>
> No, it
On 09/17/2018 11:48 AM, Peter Zijlstra wrote:
> On Sat, Sep 15, 2018 at 10:48:20AM +0200, Jan H. Schönherr wrote:
>> On 09/14/2018 06:25 PM, Jan H. Schönherr wrote:
>
>>> b) ability to move CFS RQs between CPUs: someone changed the affinity of
>>>a cpuset? No pro
On 09/18/2018 02:33 AM, Subhra Mazumdar wrote:
> On 09/07/2018 02:39 PM, Jan H. Schönherr wrote:
>> A) Quickstart guide for the impatient.
>> --
>>
>> Here is a quickstart guide to set up coscheduling at core-level for
>> select
On 09/14/2018 06:25 PM, Jan H. Schönherr wrote:
> On 09/14/2018 01:12 PM, Peter Zijlstra wrote:
>>
>> There are known scalability problems with the existing cgroup muck; you
>> just made things a ton worse. The existing cgroup overhead is
>> significant, you also
On 09/14/2018 01:12 PM, Peter Zijlstra wrote:
> On Fri, Sep 07, 2018 at 11:39:47PM +0200, Jan H. Schönherr wrote:
>> This patch series extends CFS with support for coscheduling. The
>> implementation is versatile enough to cover many different coscheduling
>> use-cases, w
py.
Partly-reported-by: Nishanth Aravamudan
Signed-off-by: Jan H. Schönherr
---
kernel/sched/cosched.c | 2 ++
kernel/sched/fair.c| 35 ++-
2 files changed, 12 insertions(+), 25 deletions(-)
diff --git a/kernel/sched/cosched.c b/kernel/sched/cosched.c
index a1f0d3
On 09/13/2018 01:15 AM, Nishanth Aravamudan wrote:
> [...] if I just try to set machine's
> cpu.scheduled to 1, with no other changes (not even changing any child
> cgroup's cpu.scheduled yet), I get the following trace:
>
> [16052.164259] [ cut here ]
> [16052.168973]
On 09/12/2018 09:34 PM, Jan H. Schönherr wrote:
> That said, I see a hang, too. It seems to happen, when there is a
> cpu.scheduled!=0 group that is not a direct child of the root task group.
> You seem to have "/sys/fs/cgroup/cpu/machine" as an intermediate group.
> (T
On 09/12/2018 02:24 AM, Nishanth Aravamudan wrote:
> [ I am not subscribed to LKML, please keep me CC'd on replies ]
>
> I tried a simple test with several VMs (in my initial test, I have 48
> idle 1-cpu 512-mb VMs and 2 idle 2-cpu, 2-gb VMs) using libvirt, none
> pinned to any CPUs. When I tried
ghts are:
23: Data structures used for coscheduling.
24-26: Creation of root-task-group runqueue hierarchy.
39-40: Runqueue hierarchies for normal task groups.
41-42: Locking strategies under coscheduling.
47-49: Adjust core CFS code.
52: Adjust core CFS code.
54-56: A
the other direction.
Adjust all users, simplifying many of them.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/core.c | 7 ++-
kernel/sched/debug.c | 2 +-
kernel/sched/fair.c | 36
kernel/sched/sched.h | 5 ++---
4 files changed, 21 insertions(+), 2
-off-by: Jan H. Schönherr
---
kernel/sched/core.c | 21 +++--
kernel/sched/sched.h | 6 ++
2 files changed, 25 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index fd1b0abd8474..c38a54f57e90 100644
--- a/kernel/sched/core.c
+++ b/kernel
Prepare for future changes and refactor sync_throttle() to work with
a different set of arguments.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/fair.c | 13 ++---
1 file changed, 6 insertions(+), 7 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index
Factor out the logic to retrieve the parent CFS runqueue of another
CFS runqueue into its own function and replace open-coded variants.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/fair.c | 18 --
1 file changed, 12 insertions(+), 6 deletions(-)
diff --git a/kernel/sched
With scheduling domains sufficiently prepared, we can now initialize
the full hierarchy of runqueues and link it with the already existing
bottom level, which we set up earlier.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/core.c| 1 +
kernel/sched/cosched.c | 76
Scheduled task groups will bring coscheduling to Linux.
The actual functionality will be added successively.
Signed-off-by: Jan H. Schönherr
---
init/Kconfig | 11 +++
kernel/sched/Makefile | 1 +
kernel/sched/cosched.c | 9 +
3 files changed, 21 insertions
The code path is not yet adjusted for coscheduling. Disable
it for now.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/fair.c | 10 ++
1 file changed, 10 insertions(+)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 30e5ff30f442..8504790944bf 100644
--- a/kernel/sched
callers to use hrq_of() instead of rq_of() to derive the cpu
argument.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/fair.c | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index fde1c4ba4bb4..a2945355f823 100644
--- a/kernel/sched
Provide variants of the task group CFS traversal constructs that also
reach the hierarchical runqueues. Adjust task group management functions
where necessary.
Most of the changes are in alloc_fair_sched_group(), where we now need to
be a bit more careful during initialization.
Signed-off-by: Jan H
(), which returns the leader's CPU runqueue.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/fair.c | 35 +++
1 file changed, 35 insertions(+)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 8cba7b8fb6bd..24d01bf8f796 100644
--- a/kernel/sched/fair.c
er-CPU runqueues.
The change in set_next_entity() just silences a warning. The code looks
bogus even without coscheduling, as the weight of an SE is independent
of the weight of the runqueue when task groups are involved. It's
just for statistics anyway.
Signed-off-by: Jan H. Schönherr
---
kernel
.
Include some lockdep goodness, so that we detect incorrect usage.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/fair.c | 70 +
1 file changed, 70 insertions(+)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index f72a72c8c3b8
that only one of multiple CPUs has to walk up the
hierarchy.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/fair.c | 33 +
1 file changed, 33 insertions(+)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 6d64f4478fda..0dc4d289497c 100644
we perform the actual
SD-SE weight adjustment via update_sdse_load().
At some point in the future (the code isn't there yet), this will
allow software combining, where not all CPUs have to walk up the
full hierarchy on enqueue/dequeue.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/fair.c |
) putting the current
task back.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/fair.c | 28 +++-
1 file changed, 23 insertions(+), 5 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 2aa3a60dfca5..2227e4840355 100644
--- a/kernel/sched/fair.c
bumping the aggregated value.
(A nicer solution would be to apply only the actual difference to the
aggregate instead of doing full removal and a subsequent addition.)
Signed-off-by: Jan H. Schönherr
---
kernel/sched/fair.c | 15 +++
1 file changed, 15 insertions(+)
diff --git
Move struct rq_flags around to keep future commits crisp.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/sched.h | 26 +-
1 file changed, 13 insertions(+), 13 deletions(-)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index b8c8dfd0e88d..cd3a32ce8fc6 100644
Signed-off-by: Jan H. Schönherr
---
kernel/sched/core.c| 2 ++
kernel/sched/cosched.c | 85 ++
kernel/sched/sched.h | 6
3 files changed, 93 insertions(+)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 48e37c3baed1..a235b6041
The function cfs_rq_util_change() notifies frequency governors of
utilization changes, so that they can be scheduler driven. This is
coupled to per-CPU runqueue statistics. So, don't do anything
when called for non-CPU runqueues.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/fair.c | 11
With coscheduling the number of required classes is twice the depth of
the scheduling domain hierarchy. For a 256 CPU system, there are eight
levels at most. Adjust the number of subclasses, so that lockdep can
still be used on such systems.
Signed-off-by: Jan H. Schönherr
---
include/linux
, where the sdrq->is_root fields do not yield
a consistent picture across a task group.
Handle these cases.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/fair.c | 68 +
1 file changed, 68 insertions(+)
diff --git a/kernel/sched/fair.
off-by: Jan H. Schönherr
---
kernel/sched/core.c | 5 ++---
kernel/sched/fair.c | 11 +--
kernel/sched/sched.h | 21 +
3 files changed, 28 insertions(+), 9 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 5350cab7ac4a..337bae6fa836 100
ent group to
create a new group.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/fair.c | 64 +++-
kernel/sched/sched.h | 31 +
2 files changed, 59 insertions(+), 36 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sc
, we need to handle additional idle cases, as CPUs
are now idle *within* certain coscheduled sets and woken tasks may
not preempt the idle task blindly anymore.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/fair.c | 85 +++--
1 file changed, 83
transparently to system level.
Signed-off-by: Jan H. Schönherr
---
kernel/sched/cosched.c | 32 +++-
1 file changed, 31 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/cosched.c b/kernel/sched/cosched.c
index eb6a6a61521e..a1f0d3a7b02a 100644
--- a/kernel/sched
r use case. This will change soon.
Also, move the structure definition below kernel/sched/. It is not used
outside and in the future it will carry some more internal types that
we don't want to expose.
Signed-off-by: Jan H. Schönherr
---
include/linux/sched/topology.h | 6 --
kernel/sch