Recently we had a discussion about cond_resched unconditionally
recording a voluntary context switch [1].
Let's add a comment clarifying how this API is to be used.
[1]
https://lkml.kernel.org/r/1526027434-21237-1-git-send-email-byungchul.p...@lge.com
Signed-off-by: Joel Fernandes (Google)
assign it at that time.
Just a clean up patch, no logical change.
Signed-off-by: Joel Fernandes (Google)
---
kernel/rcu/tree.c | 34 ++
1 file changed, 18 insertions(+), 16 deletions(-)
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 9f5679ba413b
rcu_seq_snap may be tricky for someone looking at it for the first time.
Let's document how it works with an example to make it easier to understand.
Signed-off-by: Joel Fernandes (Google)
---
kernel/rcu/rcu.h | 24 +++-
1 file changed, 23 insertions(+), 1 deletion(-)
diff --git a/kernel
Currently the tree RCU clean up code records a CleanupMore trace event
even if the GP was already in progress. This makes CleanupMore show up
twice for no reason. Avoid it.
Signed-off-by: Joel Fernandes (Google)
---
kernel/rcu/tree.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff
up patch, no logical change.
Signed-off-by: Joel Fernandes (Google)
---
include/trace/events/rcu.h | 15 ++--
kernel/rcu/tree.c | 47 ++
2 files changed, 35 insertions(+), 27 deletions(-)
diff --git a/include/trace/events/rcu.h b/include/trace
Commit be4b8beed87d ("rcu: Move RCU's grace-period-change code to ->gp_seq")
removed the cpuend grace period trace point. This patch adds it back.
Signed-off-by: Joel Fernandes (Google)
---
kernel/rcu/tree.c | 6 --
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git
://patchwork.kernel.org/patch/10384261/
CC: Viresh Kumar
CC: Rafael J. Wysocki
CC: Peter Zijlstra
CC: Ingo Molnar
CC: Patrick Bellasi
CC: Juri Lelli
Cc: Luca Abeni
CC: Joel Fernandes
CC: linux...@vger.kernel.org
Signed-off-by: Joel Fernandes (Google)
---
Claudio,
Could you also test this patch for your usecase?
Feng
Cc: Paul McKenney
Cc: Masami Hiramatsu
Cc: Todd Kjos
Cc: Erick Reyes
Cc: Julia Cartwright
Cc: kernel-t...@android.com
Signed-off-by: Joel Fernandes (Google)
---
tools/testing/selftests/ftrace/config | 3 +
.../test.d/preemptirq/irqsoff_tracer.tc | 74 +++
2
From: "Joel Fernandes (Google)"
Currently there is a chance of a schedutil cpufreq update request to be
dropped if there is a pending update request. This pending request can
be delayed if there is a scheduling delay of the irq_work and the wake
up of the schedutil governor kthread.
On Thu, Aug 17, 2017 at 6:25 PM, Byungchul Park wrote:
> On Mon, Aug 07, 2017 at 12:50:32PM +0900, Byungchul Park wrote:
>> When cpudl_find() returns any among free_cpus, the cpu might not be
>> closer than others, considering sched domain. For example:
>>
>>this_cpu: 15
>>free_cpus: 0,
Hi Byungchul,
On Thu, Aug 17, 2017 at 11:05 PM, Byungchul Park wrote:
> It would be better to avoid pushing tasks to other cpu within
> a SD_PREFER_SIBLING domain, instead, get more chances to check other
> siblings.
>
> Signed-off-by: Byungchul Park
> ---
> kernel/sched/deadline.c | 55
>
Hi Steve,
On Fri, Oct 6, 2017 at 11:07 AM, Steven Rostedt wrote:
> From: "Steven Rostedt (VMware)"
>
> The ftrace_mod_map is a descriptor to save module init function names in
> case they were traced, and the trace output needs to reference the function
> name from the function address. But
Hi Steve,
On Sat, Oct 7, 2017 at 6:32 AM, Steven Rostedt wrote:
> On Fri, 6 Oct 2017 23:41:25 -0700
> "Joel Fernandes (Google)" wrote:
>
>> Hi Steve,
>>
>> On Fri, Oct 6, 2017 at 11:07 AM, Steven Rostedt wrote:
>> > From: "Steven Rostedt (VMw
Hi Viresh,
On Wed, Jul 26, 2017 at 2:22 AM, Viresh Kumar wrote:
>
> With Android UI and benchmarks the latency of cpufreq response to
> certain scheduling events can become very critical. Currently, callbacks
> into schedutil are only made from the scheduler if the target CPU of the
> event is
Hi Viresh,
On Wed, Jul 26, 2017 at 2:22 AM, Viresh Kumar wrote:
> We do not call cpufreq callbacks from scheduler core for remote
> (non-local) CPUs currently. But there are cases where such remote
> callbacks are useful, specially in the case of shared cpufreq policies.
>
> This patch updates
On Wed, Jul 26, 2017 at 2:22 AM, Viresh Kumar wrote:
> This patch updates the schedutil governor to process cpufreq utilization
> update hooks called for remote CPUs where the remote CPU is managed by
> the cpufreq policy of the local CPU.
>
> Based on initial work from Steve Muckle.
>
>
On Wed, Jul 26, 2017 at 10:50 PM, Viresh Kumar wrote:
> On 26-07-17, 22:34, Joel Fernandes (Google) wrote:
>> On Wed, Jul 26, 2017 at 2:22 AM, Viresh Kumar
>> wrote:
>> > @@ -221,7 +226,7 @@ static void sugov_update_single(struct
>> >
Hi Viresh,
On Wed, Jul 26, 2017 at 10:46 PM, Viresh Kumar wrote:
> On 26-07-17, 22:14, Joel Fernandes (Google) wrote:
>> Also one more comment about this usecase:
>>
>> You mentioned in our discussion at [2] sometime back, about the
>> question of initial utilizatio
On Thu, Jul 27, 2017 at 12:14 AM, Viresh Kumar wrote:
> On 26-07-17, 23:13, Joel Fernandes (Google) wrote:
>> On Wed, Jul 26, 2017 at 10:50 PM, Viresh Kumar
>> wrote:
>> > On 26-07-17, 22:34, Joel Fernandes (Google) wrote:
>> >> On Wed, Jul 26, 2017 at
On Thu, Jul 27, 2017 at 12:21 AM, Juri Lelli wrote:
[..]
>> >
>> > But even without that, if you see the routine
>> > init_entity_runnable_average() in fair.c, the new tasks are
>> > initialized in a way that they are seen as heavy tasks. And so even
>> > for the first time they run, freq should
On Thu, Jul 27, 2017 at 12:55 PM, Saravana Kannan
wrote:
> On 07/26/2017 08:30 PM, Viresh Kumar wrote:
>>
>> On 26-07-17, 14:00, Saravana Kannan wrote:
>>>
>>> No, the alternative is to pass it on to the CPU freq driver and let it
>>> decide what it wants to do. That's the whole point if having a
On Fri, Apr 6, 2018 at 5:58 AM, Morten Rasmussen
wrote:
> On Thu, Apr 05, 2018 at 06:22:48PM +0200, Vincent Guittot wrote:
>> Hi Morten,
>>
>> On 5 April 2018 at 17:46, Morten Rasmussen wrote:
>> > On Wed, Apr 04, 2018 at 03:43:17PM +0200, Vincent Guittot wrote:
>> >> On 4 April 2018 at 12:44,
Hi,
Or maintain array of registered irqs and iterate over them only.
>>> Right, we can allocate a bitmap of used irqs to do that.
>>>
I have another idea.
perf record shows mutex_lock/mutex_unlock at the top.
Most of them are on the irq mutex, not the seqfile mutex, as there are many
Hi Tom,
Nice series and nice ELC talk as well. Thanks.
On Mon, Jun 26, 2017 at 3:49 PM, Tom Zanussi
wrote:
> This patchset adds support for 'inter-event' quantities to the trace
> event subsystem. The most important example of inter-event quantities
> are latencies, or the time differences
On Mon, Jun 26, 2017 at 3:49 PM, Tom Zanussi
wrote:
> RINGBUF_TYPE_TIME_STAMP is defined but not used, and from what I can
> gather was reserved for something like an absolute timestamp feature
> for the ring buffer, if not a complete replacement of the current
> time_delta scheme.
>
> This code
changes for core scheduling
Joel Fernandes (Google) (3):
kselftest: Add tests for core-sched interface
Documentation: Add core scheduling documentation
sched: Debug bits...
Peter Zijlstra (1):
sched: CGroup tagging interface for core scheduling
.../admin-guide/hw-vuln/core-scheduling.rst | 263
Signed-off-by: Joel Fernandes (Google)
---
.../admin-guide/hw-vuln/core-scheduling.rst | 263 ++
Documentation/admin-guide/hw-vuln/index.rst | 1 +
2 files changed, 264 insertions(+)
create mode 100644 Documentation/admin-guide/hw-vuln/core-scheduling.rst
diff --git
Add a kselftest test to ensure that the core-sched interface is working
correctly.
Co-developed-by: Chris Hyser
Signed-off-by: Chris Hyser
Tested-by: Julien Desfossez
Reviewed-by: Josh Don
Signed-off-by: Josh Don
Signed-off-by: Chris Hyser
Signed-off-by: Joel Fernandes (Google)
---
tools
-by: Tim Chen
Signed-off-by: Vineeth Remanan Pillai
Signed-off-by: Joel Fernandes (Google)
---
kernel/sched/fair.c | 33 +---
kernel/sched/sched.h | 72
2 files changed, 101 insertions(+), 4 deletions(-)
diff --git a/kernel/sched/fair.c b
for now that avoids
such complications.
The core scheduler has extra overhead. Enable it only for cores with
more than one SMT hardware thread.
Co-developed-by: Josh Don
Co-developed-by: Chris Hyser
Co-developed-by: Joel Fernandes (Google)
Tested-by: Julien Desfossez
Signed-off-by: Tim Chen
Tested-by: Julien Desfossez
Not-Signed-off-by: Peter Zijlstra (Intel)
---
kernel/sched/core.c | 35 ++-
kernel/sched/fair.c | 9 +
2 files changed, 43 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index
)
*99.0th: 717 (7 samples)
99.5th: 725 (2 samples)
99.9th: 725 (0 samples)
Cc: Paul McKenney
Cc: Frederic Weisbecker
Suggested-by: Dietmar Eggeman
Co-developed-by: Qais Yousef
Signed-off-by: Qais Yousef
Signed-off-by: Joel Fernandes (Google)
---
kernel/sched/fair.c | 2
when ready cbs were present, to when the ready callbacks were
invoked by the rcuop thread. This also further confirms that there is no
need to raise the softirq for ready cbs in the first place.
Cc: neer...@codeaurora.org
Signed-off-by: Joel Fernandes (Google)
---
v1->v2: Also cleaned up anot
is rather incomplete.
This commit therefore continues the section by describing how RCU's
design handles CPU hotplug in a deadlock-free way.
Signed-off-by: Joel Fernandes (Google)
---
.../RCU/Design/Requirements/Requirements.rst | 30 +--
1 file changed, 28 insertions(+), 2
l E. McKenney
Signed-off-by: Joel Fernandes (Google)
---
kernel/rcu/tree.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 55d3700dd1e7..5efe0a98ea45 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -4119,7 +4119,9 @@ v
This memory barrier is not needed as rcu_segcblist_add_len() already
includes a memory barrier.
Signed-off-by: Joel Fernandes (Google)
---
kernel/rcu/tree.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 346a05506935..6c6d3c7036e6 100644
in the respective segment.
Signed-off-by: Joel Fernandes (Google)
---
include/trace/events/rcu.h | 25 +
kernel/rcu/rcu_segcblist.c | 31 +++
kernel/rcu/rcu_segcblist.h | 5 +
kernel/rcu/tree.c | 9 +
4 files changed, 70 insertions
Signed-off-by: Joel Fernandes (Google)
---
kernel/rcu/rcu_segcblist.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/kernel/rcu/rcu_segcblist.c b/kernel/rcu/rcu_segcblist.c
index 2dccbd29cd3a..271d5d9d7f60 100644
--- a/kernel/rcu/rcu_segcblist.c
+++ b/kernel/rcu/rcu_segcblist.c
@@ -10,7
Memory barriers are needed when updating the full length of the
segcblist, however it is not fully clear why one is needed before and
after. This patch therefore adds additional comments to the function
header to explain it.
Signed-off-by: Joel Fernandes (Google)
---
kernel/rcu
ues.
Fixed minor nit from Davidlohr.
v1->v3: minor nits.
(https://lore.kernel.org/lkml/20200719034210.2382053-1-joel@xxxxx/)
Joel Fernandes (Google) (6):
rcu/tree: Make rcu_do_batch count how many callbacks were executed
rcu/segcblist: Add counters to segcblist datastructure
rcu/trac
related to using donecbs's ->len field as a
temporary variable to save the segmented callback list's length. This cannot be
done anymore and is not needed.
Signed-off-by: Joel Fernandes (Google)
---
include/linux/rcu_segcblist.h | 2 +
kernel/rcu/rcu_segcblist.c|
from 0 is confusing and error-prone IMHO.
This commit therefore explicitly counts how many callbacks were executed in
rcu_do_batch() itself, and uses that to update the per-CPU segcb list's ->len
field, without relying on the negativity of rcl->len.
Signed-off-by: Joel Fernandes (Googl
ues.
Fixed minor nit from Davidlohr.
v1->v3: minor nits.
(https://lore.kernel.org/lkml/20200719034210.2382053-1-j...@joelfernandes.org/)
Joel Fernandes (Google) (5):
rcu/tree: Make rcu_do_batch count how many callbacks were executed
rcu/segcblist: Add counters to segcblist datastructure
rc
Signed-off-by: Joel Fernandes (Google)
---
kernel/rcu/rcu_segcblist.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/kernel/rcu/rcu_segcblist.c b/kernel/rcu/rcu_segcblist.c
index 72b284f965aa..13f8f181521d 100644
--- a/kernel/rcu/rcu_segcblist.c
+++ b/kernel/rcu/rcu_segcblist.c
@@ -10,7
rcu_barrier() may skip queuing a callback and return too early. Fix it by
storing
state to indicate that callbacks are being invoked and the callback list should
not appear as non-empty. This is a terrible hack, however it still does not fix
TREE04.
Signed-off-by: Joel Fernandes (Google)
---
include
from 0 is confusing and error-prone IMHO.
This commit therefore explicitly counts how many callbacks were executed in
rcu_do_batch() itself, and uses that to update the per-CPU segcb list's ->len
field, without relying on the negativity of rcl->len.
Signed-off-by: Joel Fernandes (Google)
---
in the respective segment.
Signed-off-by: Joel Fernandes (Google)
---
include/trace/events/rcu.h | 25 +
kernel/rcu/rcu_segcblist.c | 34 ++
kernel/rcu/rcu_segcblist.h | 5 +
kernel/rcu/tree.c | 9 +
4 files changed, 73 insertions
related to using donecbs's ->len field as a
temporary variable to save the segmented callback list's length. This is not
needed any more.
Signed-off-by: Joel Fernandes (Google)
---
include/linux/rcu_segcblist.h | 2 +
kernel/rcu/rcu_segcblist.c|
information.
In the future, if needed we could add more options to make it possible to
force-enable coresched. But right now I don't see a need for that, till a
usecase arises.
Joel Fernandes (Google) (2):
x86/bugs: Disable coresched on hardware that does not need it
sched/debug: Add debug
Some hardware such as certain AMD variants don't have cross-HT MDS/L1TF
issues. Detect this and don't enable core scheduling as it can
needlessly slow the device down.
Signed-off-by: Joel Fernandes (Google)
---
arch/x86/kernel/cpu/bugs.c | 8
kernel/sched/core.c| 7
It is useful to see whether coresched is enabled or not, especially in
devices that don't need it. Add information about the same to
/proc/sched_debug.
Signed-off-by: Joel Fernandes (Google)
---
kernel/sched/debug.c | 4
1 file changed, 4 insertions(+)
diff --git a/kernel/sched/debug.c b
in the respective segment.
Reviewed-by: Frederic Weisbecker
Reviewed-by: Neeraj Upadhyay
Signed-off-by: Joel Fernandes (Google)
---
include/trace/events/rcu.h | 26 ++
kernel/rcu/tree.c | 9 +
2 files changed, 35 insertions(+)
diff --git a/include/trace/events/rcu.h
Memory barriers are needed when updating the full length of the
segcblist, however it is not fully clear why one is needed before and
after. This patch therefore adds additional comments to the function
header to explain it.
Signed-off-by: Joel Fernandes (Google)
---
kernel/rcu
This memory barrier is not needed as rcu_segcblist_add_len() already
includes a memory barrier *before* the length of the list is updated.
Same reasoning for rcu_segcblist_enqueue().
Signed-off-by: Joel Fernandes (Google)
---
kernel/rcu/rcu_segcblist.c | 1 -
kernel/rcu/tree.c | 1
Signed-off-by: Joel Fernandes (Google)
---
kernel/rcu/rcu_segcblist.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/kernel/rcu/rcu_segcblist.c b/kernel/rcu/rcu_segcblist.c
index b0aaa51e0ee6..19ff82b805fb 100644
--- a/kernel/rcu/rcu_segcblist.c
+++ b/kernel/rcu/rcu_segcblist.c
@@ -10,7
ous changes, bug fixes. Discovery of rcu_barrier issue.
v4: Restructured rcu_do_batch() and segcblist merging to avoid issues.
Fixed minor nit from Davidlohr.
v1->v3: minor nits.
(https://lore.kernel.org/lkml/20200719034210.2382053-1-joel@xxxxx/)
Joel Fernandes (Google) (6):
rc
from 0 is confusing and error-prone IMHO.
This commit therefore explicitly counts how many callbacks were executed in
rcu_do_batch() itself, and uses that to update the per-CPU segcb list's ->len
field, without relying on the negativity of rcl->len.
Signed-off-by: Joel Fernandes (Googl
related to using donecbs's ->len field as a
temporary variable to save the segmented callback list's length. This cannot be
done anymore and is not needed.
Signed-off-by: Joel Fernandes (Google)
---
include/linux/rcu_segcblist.h | 1 +
kernel/rcu/rcu_segcblist.c|
in the respective segment.
Signed-off-by: Joel Fernandes (Google)
---
include/trace/events/rcu.h | 25 +
kernel/rcu/rcu_segcblist.c | 31 +++
kernel/rcu/rcu_segcblist.h | 5 +
kernel/rcu/tree.c | 9 +
4 files changed, 70 insertions
e
issues w.r.t process/taskgroup weights:
https://lwn.net/ml/linux-kernel/20200225034438.GA617271@z...
Aubrey Li (1):
sched: migration changes for core scheduling
Joel Fernandes (Google) (16):
sched/fair: Snapshot the min_vruntime of CPUs on force idle
sched: Enqueue task into core queue
From: Peter Zijlstra
In preparation of playing games with rq->lock, abstract the thing
using an accessor.
Tested-by: Julien Desfossez
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Vineeth Remanan Pillai
Signed-off-by: Julien Desfossez
Signed-off-by: Joel Fernandes (Goo
(Intel)
Signed-off-by: Vineeth Remanan Pillai
Signed-off-by: Julien Desfossez
Signed-off-by: Joel Fernandes (Google)
---
kernel/sched/deadline.c | 16 ++--
kernel/sched/fair.c | 32 +++-
kernel/sched/idle.c | 8
kernel/sched/rt.c
and that just duplicates a lot of
stuff for no raisin (the 2nd copy lives in the rt-mutex PI code).
Reviewed-by: Joel Fernandes (Google)
Tested-by: Julien Desfossez
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Vineeth Remanan Pillai
Signed-off-by: Julien Desfossez
Signed-off-by: Joel
NOTE: This problem will be fixed differently in a later patch. It is just
kept here for reference purposes about this issue, and to make
applying later patches easier.
Reported-by: Joel Fernandes (Google)
Signed-off-by: Peter Zijlstra
Signed-off-by: Joel Fernandes (Google)
---
kernel/sc
it by enqueuing into the core queue only after the class-specific
enqueue callback has been called. This ensures that for CFS tasks, the
updated vruntime value is used when enqueuing the task into the core
rbtree.
Reviewed-by: Vineeth Pillai
Signed-off-by: Joel Fernandes (Google)
---
kernel/sched
.
Tested-by: Julien Desfossez
Reviewed-by: Aubrey Li
Signed-off-by: Joel Fernandes (Google)
---
arch/x86/include/asm/thread_info.h | 2 ++
kernel/sched/sched.h | 6 ++
2 files changed, 8 insertions(+)
diff --git a/arch/x86/include/asm/thread_info.h
b/arch/x86/include/asm
-off-by: Joel Fernandes (Google)
---
drivers/gpu/drm/i915/i915_request.c | 4 ++--
include/linux/irq_work.h| 33 ++---
include/linux/irqflags.h| 4 ++--
kernel/bpf/stackmap.c | 2 +-
kernel/irq_work.c | 18
From: Peter Zijlstra
Introduce the basic infrastructure to have a core wide rq->lock.
Tested-by: Julien Desfossez
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Julien Desfossez
Signed-off-by: Vineeth Remanan Pillai
Signed-off-by: Joel Fernandes (Google)
---
kernel/Kconfig.pree
APIs and support.
Reviewed-by: Josh Don
Tested-by: Julien Desfossez
Signed-off-by: Joel Fernandes (Google)
---
include/linux/sched.h | 2 +
kernel/sched/core.c | 241 --
kernel/sched/debug.c | 4 +
3 files changed, 236 insertions(+), 11 deletions
From: Vineeth Pillai
Similar to how user to kernel mode transitions are protected in earlier
patches, protect the entry into kernel from guest mode as well.
Tested-by: Julien Desfossez
Reviewed-by: Joel Fernandes (Google)
Reviewed-by: Alexandre Chartre
Signed-off-by: Vineeth Pillai
Signed
of existing core cookies so that multiple
tasks may share the same core_cookie.
This will be especially useful in the next patch, where the concept of
cookie color is introduced.
Reviewed-by: Joel Fernandes (Google)
Signed-off-by: Josh Don
Signed-off-by: Joel Fernandes (Google)
---
kernel
the camera streaming frame rate by ~3%.
Tested-by: Julien Desfossez
Reviewed-by: Aubrey Li
Co-developed-by: Chris Hyser
Signed-off-by: Chris Hyser
Signed-off-by: Joel Fernandes (Google)
---
include/linux/sched.h| 1 +
include/uapi/linux/prctl.h | 3 ++
kernel/sched/core.c
-by: Tim Chen
Signed-off-by: Vineeth Remanan Pillai
Signed-off-by: Joel Fernandes (Google)
---
kernel/sched/fair.c | 64
kernel/sched/sched.h | 29
2 files changed, 88 insertions(+), 5 deletions(-)
diff --git a/kernel/sched/fair.c b
From: Peter Zijlstra
Tested-by: Julien Desfossez
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Joel Fernandes (Google)
---
kernel/sched/fair.c | 11 +--
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 51483a00a755
This will be used by kselftest to verify the CGroup cookie value that is
set by the CGroup interface.
Reviewed-by: Josh Don
Tested-by: Julien Desfossez
Signed-off-by: Joel Fernandes (Google)
---
kernel/sched/core.c | 31 +++
1 file changed, 31 insertions(+)
diff
From: Peter Zijlstra
Instead of only selecting a local task, select a task for all SMT
siblings for every reschedule on the core (irrespective which logical
CPU does the reschedule).
Tested-by: Julien Desfossez
Reviewed-by: Joel Fernandes (Google)
Signed-off-by: Peter Zijlstra (Intel)
Signed
-by: Vineeth Pillai
Signed-off-by: Vineeth Pillai
Signed-off-by: Joel Fernandes (Google)
---
.../admin-guide/kernel-parameters.txt | 11 +
include/linux/entry-common.h | 12 +-
include/linux/sched.h | 12 +
kernel/entry/common.c
Fernandes (Google)
---
include/linux/sched.h | 3 +++
kernel/fork.c | 1 +
kernel/sched/core.c | 8
3 files changed, 12 insertions(+)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 79d76c78cc8e..6fbdb1a204bf 100644
--- a/include/linux/sched.h
+++ b/include/linux
in case anyone reports an issue with it. Testing
shows it to be working for me.
Reviewed-by: Vineeth Pillai
Suggested-by: Peter Zijlstra (Intel)
Signed-off-by: Joel Fernandes (Google)
---
kernel/sched/core.c | 73 -
1 file changed, 26 insertions
.
Reviewed-by: Joel Fernandes (Google)
Signed-off-by: Josh Don
Signed-off-by: Joel Fernandes (Google)
---
include/linux/sched.h | 1 +
kernel/sched/core.c | 120 +++---
kernel/sched/sched.h | 2 +
3 files changed, 103 insertions(+), 20 deletions(-)
diff
Add a kselftest test to ensure that the core-sched interface is working
correctly.
Tested-by: Julien Desfossez
Reviewed-by: Josh Don
Signed-off-by: Joel Fernandes (Google)
---
tools/testing/selftests/sched/.gitignore | 1 +
tools/testing/selftests/sched/Makefile| 14 +
tools
ime.
Suggested-by: Vineeth Remanan Pillai
Signed-off-by: Peter Zijlstra
Signed-off-by: Joel Fernandes (Google)
---
kernel/sched/fair.c | 11 +++
1 file changed, 3 insertions(+), 8 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 12cf068eeec8..51483a00a755 100
Desfossez
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Joel Fernandes (Google)
---
include/linux/sched.h | 1 +
kernel/sched/core.c | 130 +-
kernel/sched/idle.c | 1 +
kernel/sched/sched.h | 6 ++
4 files changed, 137 insertions(+), 1
-by: Julien Desfossez
Reviewed-by: Chris Hyser
Signed-off-by: Chris Hyser
Signed-off-by: Joel Fernandes (Google)
---
kernel/sched/Makefile | 1 +
kernel/sched/core.c| 809 +---
kernel/sched/coretag.c | 819 +
kernel
Document the usecases, design and interfaces for core scheduling.
Co-developed-by: Vineeth Pillai
Signed-off-by: Vineeth Pillai
Tested-by: Julien Desfossez
Reviewed-by: Randy Dunlap
Signed-off-by: Joel Fernandes (Google)
---
.../admin-guide/hw-vuln/core-scheduling.rst | 330
Tested-by: Julien Desfossez
Not-Signed-off-by: Peter Zijlstra (Intel)
---
kernel/sched/core.c | 35 ++-
kernel/sched/fair.c | 9 +
2 files changed, 43 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index
Fernandes (Google)
---
.../admin-guide/kernel-parameters.txt | 14 ++
arch/x86/kernel/cpu/bugs.c| 19
include/linux/cpu.h | 1 +
include/linux/sched/smt.h | 4 ++
kernel/cpu.c
Pillai
Signed-off-by: Joel Fernandes (Google)
---
kernel/sched/core.c | 183 +--
kernel/sched/sched.h | 4 +
2 files changed, 181 insertions(+), 6 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 7f807a84cc30..b99a7493d590
.
Reviewed-by: Vineeth Pillai
Suggested-by: Peter Zijlstra (Intel)
Signed-off-by: Joel Fernandes (Google)
---
kernel/sched/core.c | 61 +
kernel/sched/fair.c | 80
kernel/sched/sched.h | 7 +++-
3 files changed, 97
to schedule.
Tested-by: Julien Desfossez
Reviewed-by: Joel Fernandes (Google)
Signed-off-by: Vineeth Pillai
Signed-off-by: Julien Desfossez
Signed-off-by: Joel Fernandes (Google)
---
kernel/sched/core.c | 15 ---
kernel/sched/fair.c | 40
Add a generic_idle_{enter,exit} helper function to enter and exit kernel
protection when entering and exiting idle, respectively.
While at it, remove a stale RCU comment.
Reviewed-by: Alexandre Chartre
Tested-by: Julien Desfossez
Signed-off-by: Joel Fernandes (Google)
---
include/linux/entry
-off-by: Joel Fernandes (Google)
---
kernel/sched/core.c | 33 -
kernel/sched/fair.c | 40
kernel/sched/sched.h | 5 +
3 files changed, 65 insertions(+), 13 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched
After rcu_do_batch(), add a check for whether the seglen counts went to
zero if the list was indeed empty.
Signed-off-by: Joel Fernandes (Google)
---
kernel/rcu/rcu_segcblist.c | 12
kernel/rcu/rcu_segcblist.h | 3 +++
kernel/rcu/tree.c | 1 +
3 files changed, 16
After rcu_do_batch(), add a check for whether the seglen counts went to
zero if the list was indeed empty.
Signed-off-by: Joel Fernandes (Google)
---
v1->v2: Added more debug checks.
kernel/rcu/rcu_segcblist.c | 12
kernel/rcu/rcu_segcblist.h | 3 +++
kernel/rcu/tre
by mm_struct.
o Keep overhead low by checking if tracing is enabled.
o Add some noise reduction and lower overhead by emitting only on
threshold changes.
Co-developed-by: Tim Murray
Signed-off-by: Tim Murray
Signed-off-by: Joel Fernandes (Google)
---
Cc: carmenjack...@google.com
Cc: mayankgu
Fernandes (Google)
---
v1->v2: Added more commit message.
Cc: carmenjack...@google.com
Cc: mayankgu...@google.com
Cc: dan...@google.com
Cc: rost...@goodmis.org
Cc: minc...@kernel.org
Cc: a...@linux-foundation.org
Cc: kernel-t...@android.com
include/linux/mm.h |
do cheaper
comparisons with zero instead for the code that keeps the tick on in
rcu_nmi_enter_common().
In the next patch, both of the concerns of (2) will be addressed and
then we can get rid of dynticks_nmi_nesting, however one step at a time.
Signed-off-by: Joel Fernandes (Google)
---
kernel/rcu/r
ail.com
Cc: byungchul.p...@lge.com
Cc: kernel-t...@android.com
Cc: kernel-t...@lge.com
Co-developed-by: Byungchul Park
Signed-off-by: Byungchul Park
Signed-off-by: Joel Fernandes (Google)
---
kernel/rcu/tree.c | 198 --
1 file changed, 193 insert
Fernandes (Google)
---
kernel/rcu/rcuperf.c | 169 ++-
1 file changed, 168 insertions(+), 1 deletion(-)
diff --git a/kernel/rcu/rcuperf.c b/kernel/rcu/rcuperf.c
index 7a6890b23c5f..34658760da5e 100644
--- a/kernel/rcu/rcuperf.c
+++ b/kernel/rcu/rcuperf.c
bpf file to /sys/kernel/debug/tracing/events/X/Y/bpf
The following commands can be written into it:
attach: Attaches a BPF prog fd to the tracepoint
detach: Detaches a BPF prog fd from the tracepoint
Reading the bpf file will show all the attached programs to the tracepoint.
Joel Fernandes (Google) (4