From: "Joel Fernandes (Google)"
When an unsafe region is entered on an HT, an IPI needs to be sent to
siblings to ensure they enter the kernel.
The following are the reasons why we would like to use irq_work to
force siblings into kernel mode:
1. Existing smp_call infrastructure
From: Aaron Lu
Add a wrapper function cfs_rq_min_vruntime(cfs_rq) to
return cfs_rq->min_vruntime.
It will be used in the following patch, no functionality
change.
Signed-off-by: Aaron Lu
---
kernel/sched/fair.c | 27 ---
1 file changed, 16 insertions(+), 11 deletions(-)
to schedule.
Signed-off-by: Vineeth Remanan Pillai
Signed-off-by: Julien Desfossez
---
kernel/sched/fair.c | 39 +++
1 file changed, 39 insertions(+)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 285002a2f641..409edc736297 100644
--- a/kernel/sched
. This can confuse the logic. Add retry logic
if smt_mask changes between the loops.
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Julien Desfossez
Signed-off-by: Vineeth Remanan Pillai
Signed-off-by: Joel Fernandes (Google)
Signed-off-by: Aaron Lu
Signed-off-by: Tim Chen
Signed-off
From: "Joel Fernandes (Google)"
Currently only RCU hooks for idle entry/exit are called. In later
patches, kernel-entry protection functionality will be added.
Signed-off-by: Joel Fernandes (Google)
---
include/linux/entry-common.h | 16
kernel/sched/idle.c | 17
From: Aubrey Li
- Don't migrate if there is a cookie mismatch
Load balance tries to move task from busiest CPU to the
destination CPU. When core scheduling is enabled, if the
task's cookie does not match with the destination CPU's
core cookie, this task will be skipped by
From: "Joel Fernandes (Google)"
Add a new TIF flag to indicate whether the kernel needs to be careful
and take additional steps to mitigate micro-architectural issues during
entry into user or guest mode.
This new flag will be used by the series to determine if waiting is
needed or not, during
attackers. There is no possible mitigation involving flushing
of buffers to avoid this, since the attacker and victim execute
concurrently on 2 or more HTs.
Cc: Julien Desfossez
Cc: Tim Chen
Cc: Aaron Lu
Cc: Aubrey Li
Cc: Tim Chen
Cc: Paul E. McKenney
Co-developed-by: Vineeth Pillai
From: "Joel Fernandes (Google)"
Co-developed-by: Vineeth Pillai
Signed-off-by: Joel Fernandes (Google)
Signed-off-by: Vineeth Pillai
---
.../admin-guide/hw-vuln/core-scheduling.rst | 253 ++
Documentation/admin-guide/hw-vuln/index.rst | 1 +
2 files changed, 254 insertions(+)
From: Aaron Lu
This patch provides a vruntime based way to compare two cfs tasks'
priority, be it on the same cpu or different threads of the same core.
When the two tasks are on the same CPU, we just need to find a common
cfs_rq both sched_entities are on and then do the comparison.
When the
From: "Joel Fernandes (Google)"
Make use of the generic_idle_{enter,exit} helper functions added in
earlier patches to enter and exit kernel protection.
On exiting idle, protection will be reenabled.
Signed-off-by: Joel Fernandes (Google)
---
include/linux/entry-common.h | 6 ++
1 file
From: Peter Zijlstra
Not-Signed-off-by: Peter Zijlstra (Intel)
---
kernel/sched/core.c | 40 +++-
1 file changed, 39 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 5f77e575bbac..def25fe5e0d4 100644
---
From: Vineeth Pillai
Similar to how user to kernel mode transitions are protected in earlier
patches, protect the entry into kernel from guest mode as well.
Signed-off-by: Vineeth Pillai
---
arch/x86/kvm/x86.c| 3 +++
include/linux/entry-kvm.h | 12
kernel/entry/kvm.c
From: Vineeth Pillai
There are use cases where the kernel protection is not needed. One
example is using core scheduling for non-security use cases, such as
dynamically isolating a core for a particular process; another is
testing/benchmarking the overhead of kernel protection.
Have a compile
for now that avoids
such complications.
The core scheduler has extra overhead. Enable it only for cores with
more than one SMT hardware thread.
Signed-off-by: Tim Chen
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Julien Desfossez
Signed-off-by: Vineeth Remanan Pillai
---
kernel/sched
From: Peter Zijlstra
When a sibling is forced idle to match the core cookie, search for
matching tasks to fill the core.
rcu_read_unlock() can incur an infrequent deadlock in
sched_core_balance(). Fix this by using the RCU-sched flavor instead.
Signed-off-by: Peter Zijlstra (Intel)
From: Vineeth Pillai
Hotplug fixes to core-scheduling require a new cpumask iterator
which iterates through all online CPUs present in either of the
given cpumasks.
This patch introduces it.
Signed-off-by: Vineeth Pillai
Signed-off-by: Joel Fernandes (Google)
---
include/linux/cpumask.h | 42
From: Peter Zijlstra
Introduce the basic infrastructure to have a core wide rq->lock.
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Julien Desfossez
Signed-off-by: Vineeth Remanan Pillai
---
kernel/Kconfig.preempt | 6 +++
kernel/sched/core.c|
and that just duplicates a lot of
stuff for no raisin (the 2nd copy lives in the rt-mutex PI code).
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Vineeth Remanan Pillai
Signed-off-by: Julien Desfossez
---
include/linux/sched.h | 8 ++-
kernel/sched/core.c | 146
Remanan Pillai
Signed-off-by: Julien Desfossez
---
kernel/sched/deadline.c | 16 ++--
kernel/sched/fair.c | 36 +---
kernel/sched/idle.c | 8
kernel/sched/rt.c| 14 --
kernel/sched/sched.h | 3 +++
kernel
From: Peter Zijlstra
In preparation of playing games with rq->lock, abstract the thing
using an accessor.
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Vineeth Remanan Pillai
Signed-off-by: Julien Desfossez
---
kernel/sched/core.c | 46 +-
kernel/sched/cpuacc
From: Vineeth Pillai
Hotplug fixes to core-scheduling require a new bitops API.
Introduce a new API, find_next_or_bit(), which returns the bit
number of the next set bit in the OR of the given bitmasks.
Signed-off-by: Vineeth Pillai
Signed-off-by: Joel Fernandes (Google)
---
From: Peter Zijlstra
Signed-off-by: Peter Zijlstra (Intel)
---
kernel/sched/fair.c | 12 ++--
1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1a1bf726264a..af8c40191a19 100644
--- a/kernel/sched/fair.c
+++
Lu
Changes in v3
-
- Fixes the issue of sibling picking up an incompatible task
- Aaron Lu
- Vineeth Pillai
- Julien Desfossez
- Fixes the issue of starving threads due to forced idle
- Peter Zijlstra
- Fixes the refcounting issue when deleting a cgroup with tag
- Julien Desfo
> I've made an attempt in the following two patches to address
> the load balancing of mismatched load between the siblings.
>
> It is applied on top of Aaron's patches:
> - sched: Fix incorrect rq tagged as forced idle
> - wrapper for cfs_rq->min_vruntime
>
On 29-Aug-2019 04:38:21 PM, Peter Zijlstra wrote:
> On Thu, Aug 29, 2019 at 10:30:51AM -0400, Phil Auld wrote:
> > I think, though, that you were basically agreeing with me that the current
> > core scheduler does not close the holes, or am I reading that wrong.
>
> Agreed; the missing bits for
> 1) Unfairness between the sibling threads
> -
> One sibling thread could be suppressing and force idling
> the sibling thread disproportionately, resulting in
> the force-idled CPU not getting to run and stalling tasks on
> the suppressed CPU.
>
> Status:
> i)
We tested both Aaron's and Tim's patches and here are our results.
Test setup:
- 2 1-thread sysbench, one running the cpu benchmark, the other one the
mem benchmark
- both started at the same time
- both are pinned on the same core (2 hardware threads)
- 10 30-seconds runs
- test script:
On 25-Jul-2019 10:30:03 PM, Aaron Lu wrote:
>
> I tried a different approach based on vruntime with 3 patches following.
[...]
We have experimented with this new patchset and indeed the fairness is
now much better. Interactive tasks with v3 were completely starving when
there were cpu-intensive
On 17-Jun-2019 10:51:27 AM, Aubrey Li wrote:
> The result looks still unfair, and particularly, the variance is too high,
I just want to confirm that I am also seeing the same issue with a
similar setup. I also tried with the priority boost fix we previously
posted, the results are slightly
On 12-Jun-2019 05:03:08 PM, Subhra Mazumdar wrote:
>
> On 6/12/19 9:33 AM, Julien Desfossez wrote:
> >After reading more traces and trying to understand why only untagged
> >tasks are starving when there are cpu-intensive tasks running on the
> >same set of CPUs,
After reading more traces and trying to understand why only untagged
tasks are starving when there are cpu-intensive tasks running on the
same set of CPUs, we noticed a difference in behavior in ‘pick_task’. In
the case where ‘core_cookie’ is 0, we are supposed to only prefer the
tagged task if
> The data on my side looks good with CORESCHED_STALL_FIX = true.
Thank you for testing this fix, I'm glad it works for this use-case as
well.
We will be posting another (simpler) version today, stay tuned :-)
Julien
On 31-May-2019 05:08:16 PM, Julien Desfossez wrote:
> > My first reaction is: when shell wakes up from sleep, it will
> > fork date. If the script is untagged and those workloads are
> > tagged and all available cores are already running workload
> > threads,
> My first reaction is: when shell wakes up from sleep, it will
> fork date. If the script is untagged and those workloads are
> tagged and all available cores are already running workload
> threads, the forked date can lose to the running workload
> threads due to __prio_less() can't properly do
On 30-May-2019 10:04:39 PM, Aubrey Li wrote:
> On Thu, May 30, 2019 at 4:36 AM Vineeth Remanan Pillai
> wrote:
> >
> > Third iteration of the Core-Scheduling feature.
> >
> > This version fixes mostly correctness related issues in v2 and
> > addresses performance issues. Also, addressed some
On 23-Apr-2019 04:18:17 PM, Vineeth Remanan Pillai wrote:
> From: Peter Zijlstra (Intel)
>
> Marks all tasks in a cgroup as matching for core-scheduling.
>
> Signed-off-by: Peter Zijlstra (Intel)
> ---
> kernel/sched/core.c | 62
>
On 08-May-2019 10:30:09 AM, Aaron Lu wrote:
> On Mon, May 06, 2019 at 03:39:37PM -0400, Julien Desfossez wrote:
> > On 29-Apr-2019 11:53:21 AM, Aaron Lu wrote:
> > > This is what I have used to make sure no two unmatched tasks being
> > > scheduled on the same core: (
On 29-Apr-2019 11:53:21 AM, Aaron Lu wrote:
> On Tue, Apr 23, 2019 at 06:45:27PM +, Vineeth Remanan Pillai wrote:
> > >> - Processes with different tags can still share the core
> >
> > > I may have missed something... Could you explain this statement?
> >
> > > This, to me, is the whole
ht be idling even though it
> > had something to run (because the sibling selected idle to match the
> > tagged process in previous tag matching iteration). We need to wake up
> > the sibling if such a situation arises.
> >
> > Signed-off-by: Vineeth Remanan Pillai
&g
On 23-Apr-2019 04:18:05 PM, Vineeth Remanan Pillai wrote:
> Second iteration of the core-scheduling feature.
>
> This version fixes apparent bugs and performance issues in v1. This
> doesn't fully address the issue of core sharing between processes
> with different tags. Core sharing still
On 24-Apr-2019 09:13:10 PM, Aubrey Li wrote:
> On Wed, Apr 24, 2019 at 12:18 AM Vineeth Remanan Pillai
> wrote:
> >
> > Second iteration of the core-scheduling feature.
> >
> > This version fixes apparent bugs and performance issues in v1. This
> > doesn't fully address the issue of core sharing
On 10-Apr-2019 10:06:30 AM, Peter Zijlstra wrote:
> while you're all having fun playing with this, I've not yet had answers
> to the important questions of how L1TF complete we want to be and if all
> this crud actually matters one way or the other.
>
> Also, I still don't see this stuff working
We found the source of the major performance regression we discussed
previously. It turns out there was a pattern where a task (a kworker in this
case) could be woken up, but the core could still end up idle before that
task had a chance to run.
Example sequence, cpu0 and cpu1 are siblings on the
> >>>Is the core wide lock primarily responsible for the regression? I ran
> >>>upto patch
> >>>12 which also has the core wide lock for tagged cgroups and also calls
> >>>newidle_balance() from pick_next_task(). I don't see any regression.
> >>>Of
> >>>course
> >>>the core sched version of
On Fri, Mar 22, 2019 at 8:09 PM Subhra Mazumdar
wrote:
> Is the core wide lock primarily responsible for the regression? I ran
> upto patch
> 12 which also has the core wide lock for tagged cgroups and also calls
> newidle_balance() from pick_next_task(). I don't see any regression. Of
> course
On Fri, Mar 22, 2019 at 9:34 AM Peter Zijlstra wrote:
> On Thu, Mar 21, 2019 at 05:20:17PM -0400, Julien Desfossez wrote:
> > On further investigation, we could see that the contention is mostly in
> the
> > way rq locks are taken. With this patchset, we lock the whole core if
On Tue, Mar 19, 2019 at 10:31 PM Subhra Mazumdar
wrote:
> On 3/18/19 8:41 AM, Julien Desfossez wrote:
> > The case where we try to acquire the lock on 2 runqueues belonging to 2
> > different cores requires the rq_lockp wrapper as well otherwise we
> >
The case where we try to acquire the lock on 2 runqueues belonging to 2
different cores requires the rq_lockp wrapper as well otherwise we
frequently deadlock in there.
This fixes the crash reported in
1552577311-8218-1-git-send-email-jdesfos...@digitalocean.com
diff --git a/kernel/sched/sched.h
On 2/18/19 8:56 AM, Peter Zijlstra wrote:
> A much 'demanded' feature: core-scheduling :-(
>
> I still hate it with a passion, and that is part of why it took a little
> longer than 'promised'.
>
> While this one doesn't have all the 'features' of the previous (never
> published) version and isn't
. See the wiki for the details.
We are still accepting discussion topic proposals.
Thanks,
Julien Desfossez & Mathieu Desnoyers
sted in sponsoring this event or next year's Tracing Summit.
See the Tracing Summit 2017 wiki at
http://www.tracingsummit.org/wiki/TracingSummit2017 for details.
Thank you,
On behalf of the Diagnostic and Monitoring Workgroup,
Julien Desfossez & Mathieu Desnoyers
<rost...@goodmis.org>
Cc: Thomas Gleixner <t...@linutronix.de>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: Daniel Bristot de Oliveira <bris...@redhat.com>
Reviewed-by: Mathieu Desnoyers <mathieu.desnoy...@efficios.com>
Signed-off-by: Julien Desfossez <jdesfos...@efficios
.
This approach is then illustrated with the sched_switch tracepoint, where we
provide a way to output different fields based on the scheduling class of the
next task.
If accepted, this method could be used everywhere the "prio" field is currently
exposed to user-space through trace
t de Oliveira <bris...@redhat.com>
Reviewed-by: Mathieu Desnoyers <mathieu.desnoy...@efficios.com>
Signed-off-by: Julien Desfossez <jdesfos...@efficios.com>
---
include/trace/events/sched.h | 192 +++
1 file changed, 192 insertions(+)
diff --git a/
Molnar
Cc: Daniel Bristot de Oliveira
Reviewed-by: Mathieu Desnoyers
Signed-off-by: Julien Desfossez
---
include/linux/trace_events.h | 14 -
include/linux/tracepoint.h | 6 ++
include/trace/define_trace.h | 6 ++
include/trace/perf.h | 24 ++--
include/trace
l_runtime=1000 next_dl_deadline=3000
next_dl_period=3000
Cc: Peter Zijlstra
Cc: Steven Rostedt (Red Hat)
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Daniel Bristot de Oliveira
Reviewed-by: Mathieu Desnoyers
Signed-off-by: Julien Desfossez
---
include/trace/events/sche
hieu.desnoy...@efficios.com>
Signed-off-by: Julien Desfossez <jdesfos...@efficios.com>
---
include/trace/events/sched.h | 96
1 file changed, 96 insertions(+)
diff --git a/include/trace/events/sched.h b/include/trace/events/sched.h
index 6880682
teven Rostedt (Red Hat) <rost...@goodmis.org>
Cc: Thomas Gleixner <t...@linutronix.de>
Cc: Ingo Molnar <mi...@redhat.com>
Reviewed-by: Mathieu Desnoyers <mathieu.desnoy...@efficios.com>
Signed-off-by: Julien Desfossez <jdesfos...@effi
licitly
changed.
Changes from v1:
- Add a cover letter
- Fix the signed-off-by chain
- Remove an effect-less fix that was proposed
- Move the effective_policy/rt_prio helpers to sched/core.c
- Reorder the patchset so that the new TP sched_update_prio is the last one
Julien Desfossez (5):
,
new_dl_runtime=0, new_dl_deadline=0, new_dl_period=0
Cc: Peter Zijlstra
Cc: Steven Rostedt (Red Hat)
Cc: Thomas Gleixner
Cc: Ingo Molnar
Reviewed-by: Mathieu Desnoyers
Signed-off-by: Julien Desfossez
---
include/trace/events/sched.h | 96
Thomas Gleixner
Cc: Ingo Molnar
Reviewed-by: Mathieu Desnoyers
Signed-off-by: Julien Desfossez
---
include/trace/events/sched.h | 222 +++
1 file changed, 222 insertions(+)
diff --git a/include/trace/events/sched.h b/include/trace/events/sched.h
index
c: Ingo Molnar <mi...@redhat.com>
Reviewed-by: Mathieu Desnoyers <mathieu.desnoy...@efficios.com>
Signed-off-by: Julien Desfossez <jdesfos...@efficios.com>
---
include/trace/events/sched.h | 68
kernel/sched/core.c | 3 ++
2 f
Gleixner <t...@linutronix.de>
Cc: Ingo Molnar <mi...@redhat.com>
Reviewed-by: Mathieu Desnoyers <mathieu.desnoy...@efficios.com>
Signed-off-by: Julien Desfossez <jdesfos...@efficios.com>
---
include/linux/trace_events.h | 14 -
include/linux/tracepoint.h | 11 +++
ed Hat) <rost...@goodmis.org>
Cc: Thomas Gleixner <t...@linutronix.de>
Cc: Ingo Molnar <mi...@redhat.com>
Reviewed-by: Mathieu Desnoyers <mathieu.desnoy...@efficios.com>
Signed-off-by: Julien Desfossez <jdesfos...@efficios.com>
---
include/linux/sched.h
: comm=b pid=2110, policy=SCHED_DEADLINE, nice=0,
rt_priority=0, dl_runtime=1000, dl_deadline=3000,
dl_period=3000
Cc: Peter Zijlstra
Cc: Steven Rostedt (Red Hat)
Cc: Thomas Gleixner
Cc: Ingo Molnar
Reviewed-by: Mathieu Desnoyers
Signed-off-by: Julien Desfossez
esnoyers
Signed-off-by: Julien Desfossez
---
include/linux/trace_events.h | 14 -
include/linux/tracepoint.h | 11 +-
include/trace/define_trace.h | 4
include/trace/perf.h | 7 +++
include/trace/trace_events.h | 50 ++
: Ingo Molnar
Reviewed-by: Mathieu Desnoyers
Signed-off-by: Julien Desfossez
---
include/linux/sched.h | 2 ++
kernel/sched/core.c | 36
2 files changed, 38 insertions(+)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index af39baf..0c03595
only fix of the patchset, the other patches aim to extract
accurate scheduling information in the trace.
> > >> Cc: Peter Zijlstra <pet...@infradead.org>
> > >> Cc: Steven Rostedt (Red Hat) <rost...@goodmis.org>
> > >> Cc: Thomas Gleixner <t...@li
releasing the lock. In
>that case it falls back to its original class/priority/bandwidth.
>
> Hope that helps.
Thanks for clarifying that, so indeed there is no risk of ambiguity here
between the scheduling class and the policy for fair tasks so this patch
is useless.
This
hieu.desnoy...@efficios.com>
Signed-off-by: Julien Desfossez <jdesfos...@efficios.com>
---
include/trace/events/sched.h | 96
1 file changed, 96 insertions(+)
diff --git a/include/trace/events/sched.h b/include/trace/events/sched.h
index 11b3358
Gleixner <t...@linutronix.de>
Cc: Ingo Molnar <mi...@redhat.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoy...@efficios.com>
Signed-off-by: Julien Desfossez <jdesfos...@efficios.com>
---
include/linux/trace_events.h | 14 -
include/linux/tracepoin
c: Ingo Molnar <mi...@redhat.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoy...@efficios.com>
Signed-off-by: Julien Desfossez <jdesfos...@efficios.com>
---
include/trace/events/sched.h | 68
kernel/sched/core.c | 3 ++
2 f
: comm=b pid=2110, policy=SCHED_DEADLINE, nice=0,
rt_priority=0, dl_runtime=1000, dl_deadline=3000,
dl_period=3000
Cc: Peter Zijlstra
Cc: Steven Rostedt (Red Hat)
Cc: Thomas Gleixner
Cc: Ingo Molnar
Signed-off-by: Mathieu Desnoyers
Signed-off-by: Julien Desfossez
,
new_dl_runtime=0, new_dl_deadline=0, new_dl_period=0
Cc: Peter Zijlstra
Cc: Steven Rostedt (Red Hat)
Cc: Thomas Gleixner
Cc: Ingo Molnar
Signed-off-by: Mathieu Desnoyers
Signed-off-by: Julien Desfossez
---
include/trace/events/sched.h | 96
teven Rostedt (Red Hat) <rost...@goodmis.org>
Cc: Thomas Gleixner <t...@linutronix.de>
Cc: Ingo Molnar <mi...@redhat.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoy...@efficios.com>
Signed-off-by: Julien Desfossez <jdesfos...@effi
Thomas Gleixner
Cc: Ingo Molnar
Signed-off-by: Mathieu Desnoyers
Signed-off-by: Julien Desfossez
---
include/trace/events/sched.h | 222 +++
1 file changed, 222 insertions(+)
diff --git a/include/trace/events/sched.h b/include/trace/events/sched.h
index
g the following tick before preempting
the current task.
Cc: Peter Zijlstra
Cc: Steven Rostedt (Red Hat)
Cc: Thomas Gleixner
Cc: Ingo Molnar
Signed-off-by: Mathieu Desnoyers
Signed-off-by: Julien Desfossez
---
kernel/sched/fair.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff
ed Hat) <rost...@goodmis.org>
Cc: Thomas Gleixner <t...@linutronix.de>
Cc: Ingo Molnar <mi...@redhat.com>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoy...@efficios.com>
Signed-off-by: Julien Desfossez <jdesfos...@efficios.com>
---
include/linux/sched/rt.h | 10 +
: Ingo Molnar
Signed-off-by: Mathieu Desnoyers
Signed-off-by: Julien Desfossez
---
include/linux/sched/rt.h | 10 ++
kernel/locking/rtmutex.c | 36
2 files changed, 46 insertions(+)
diff --git a/include/linux/sched/rt.h b/include/linux/sched/rt.h
index
On 06-Jul-2016 09:13:25 AM, Steven Rostedt wrote:
> On Tue, 5 Jul 2016 21:50:34 + (UTC)
> Mathieu Desnoyers wrote:
>
> > >
> > >> +
> > >> +TP_PROTO(struct task_struct *tsk),
> > >> +
> > >> +TP_ARGS(tsk),
> > >> +
> > >> +
Reviewed-by: Mathieu Desnoyers <mathieu.desnoy...@efficios.com>
Signed-off-by: Julien Desfossez <jdesfos...@efficios.com>
---
include/trace/events/sched.h | 68
kernel/fork.c| 1 +
kernel/sched/core.c | 3 ++
3 files
e=R ==> next_comm=burnP6 next_pid=7250
next_prio=39
Signed-off-by: Julien Desfossez <jdesfos...@efficios.com>
---
include/trace/events/sched.h | 21 -
kernel/sched/core.c | 1 +
2 files changed, 17 insertions(+), 5 deletions(-)
diff --git a/include/
.
Signed-off-by: Julien Desfossez <jdesfos...@efficios.com>
---
include/linux/sched.h | 3 ++-
kernel/sched/core.c | 19 +--
2 files changed, 15 insertions(+), 7 deletions(-)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 52c4847..48b35c0 100644
--- a/include
Hi,
Here is a blog post related to detecting and understanding high
interrupt-processing latencies on real-time systems. It is based on a
new project called latency_tracker that hooks on the existing kernel
tracepoints and executes actions when high latency events occur.
Hi everyone,
I am glad to announce the very first release of the lttng-analyses project !
https://github.com/lttng/lttng-analyses
This project is a collection of tools to extract metrics and
higher-level information from LTTng kernel traces.
Here is a complete example that illustrates how to