From: Joel Fernandes
This is required for several identified use cases, one of them being tracing how
the segmented callback list changes. Tracing this has identified issues in RCU
code in the past.
From Paul:
Another use case is of course more accurately determining whether a given CPU's
la
Signed-off-by: Joel Fernandes (Google)
---
kernel/rcu/rcu_segcblist.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/kernel/rcu/rcu_segcblist.c b/kernel/rcu/rcu_segcblist.c
index b0aaa51e0ee6..19ff82b805fb 100644
--- a/kernel/rcu/rcu_segcblist.c
+++ b/kernel/rcu/rcu_segcblist.c
@@ -10,7
Memory barriers are needed when updating the full length of the
segcblist; however, it is not fully clear why one is needed before and
after. This patch therefore adds additional comments to the function
header to explain it.
Signed-off-by: Joel Fernandes (Google)
---
kernel/rcu
This memory barrier is not needed as rcu_segcblist_add_len() already
includes a memory barrier *before* the length of the list is updated.
Same reasoning for rcu_segcblist_enqueue().
Signed-off-by: Joel Fernandes (Google)
---
kernel/rcu/rcu_segcblist.c | 1 -
kernel/rcu/tree.c | 1
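The reasoning above -- that rcu_segcblist_add_len() already carries the needed barrier before the length update -- can be illustrated with a small userspace model. This is a sketch with invented names, not the kernel implementation:

```c
#include <stdatomic.h>
#include <stddef.h>

/* Userspace model of the pattern: the fence inside the length update
 * orders the enqueue of the callback before the new length becomes
 * visible, so a reader that sees the larger length also sees the
 * callback. Because the fence lives inside model_add_len(), callers
 * such as the enqueue path need no extra barrier of their own. */
struct segcblist_model {
	void *head;          /* most recently enqueued callback */
	atomic_long len;     /* published number of queued callbacks */
};

static void model_add_len(struct segcblist_model *l, long v)
{
	/* Plays the role of the smp_mb() *before* the length update
	 * in rcu_segcblist_add_len(). */
	atomic_thread_fence(memory_order_release);
	atomic_fetch_add_explicit(&l->len, v, memory_order_relaxed);
}

static void model_enqueue(struct segcblist_model *l, void *cb)
{
	l->head = cb;         /* enqueue first ...            */
	model_add_len(l, 1);  /* ... then publish the length. */
}
```

Under this model, dropping a caller-side barrier is safe precisely because the fenced length update is the publication point.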
On Mon, Oct 19, 2020 at 5:37 AM Frederic Weisbecker wrote:
>
[..]
> > >
> > > I'm very likely missing something obvious somewhere.
> > >
> > > CPU 0                          CPU 1
> > > rcu_barrier()
> > > call_rcu()/rcu_segcblist_enqueue()
> > >
APIs and support.
Tested-by: Julien Desfossez
Signed-off-by: Joel Fernandes (Google)
---
include/linux/sched.h | 2 +
kernel/sched/core.c | 241 --
kernel/sched/debug.c | 4 +
3 files changed, 236 insertions(+), 11 deletions(-)
diff --git a/include
From: Peter Zijlstra
Marks all tasks in a cgroup as matching for core-scheduling.
A task will need to be moved into the core scheduler queue when the cgroup
it belongs to is tagged to run with core scheduling. Similarly, the task
will need to be moved out of the core scheduler queue when the
From: Vineeth Pillai
Similar to how user-to-kernel mode transitions are protected in earlier
patches, protect the entry into the kernel from guest mode as well.
Tested-by: Julien Desfossez
Signed-off-by: Vineeth Pillai
---
arch/x86/kvm/x86.c        | 3 +++
include/linux/entry-kvm.h | 12
During exit, we have to free the references to a cookie that might be shared by
many tasks. This commit therefore ensures that, when the task_struct is
released, any references to cookies it holds are also released.
Tested-by: Julien Desfossez
Signed-off-by: Joel Fernandes (Google)
---
include
From: Peter Zijlstra
Tested-by: Julien Desfossez
Not-Signed-off-by: Peter Zijlstra (Intel)
---
kernel/sched/core.c | 37 -
1 file changed, 36 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index
Tested-by: Julien Desfossez
Signed-off-by: Joel Fernandes (Google)
---
tools/testing/selftests/sched/.gitignore | 1 +
tools/testing/selftests/sched/Makefile   | 14 +
tools/testing/selftests/sched/config | 1 +
.../testing/selftests/sched/test_coresched.c | 840
This will be used by kselftest to verify the CGroup cookie value that is
set by the CGroup interface.
Tested-by: Julien Desfossez
Signed-off-by: Joel Fernandes (Google)
---
kernel/sched/core.c | 23 +++
1 file changed, 23 insertions(+)
diff --git a/kernel/sched/core.c b
the camera streaming frame rate by ~3%.
Tested-by: Julien Desfossez
Signed-off-by: Joel Fernandes (Google)
---
include/linux/sched.h      | 2 ++
include/uapi/linux/prctl.h | 3 ++
kernel/sched/core.c | 51 +---
kernel/sys.c
Zijlstra (Intel)
Signed-off-by: Joel Fernandes (Google)
Acked-by: Paul E. McKenney
---
include/linux/sched.h | 1 +
kernel/sched/core.c | 130 +-
kernel/sched/idle.c | 1 +
kernel/sched/sched.h | 6 ++
4 files changed, 137 insertions(+), 1
Document the use cases, design, and interfaces for core scheduling.
Co-developed-by: Vineeth Pillai
Tested-by: Julien Desfossez
Signed-off-by: Joel Fernandes (Google)
---
.../admin-guide/hw-vuln/core-scheduling.rst | 312 ++
Documentation/admin-guide/hw-vuln/index.rst | 1
' is an 8-bit value allowing for up to 256 unique colors. IMHO, having
more CGroups than that sounds like a scalability issue, so this suffices.
We steal the lower 8 bits of the cookie to set the color.
Tested-by: Julien Desfossez
Signed-off-by: Joel Fernandes (Google)
---
kernel/sched/core.c
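A minimal userspace sketch of the bit-stealing described above (the macro and function names are hypothetical, not the patch's):

```c
/* Pack an 8-bit color into the low byte of an otherwise opaque
 * cookie; the upper bits of the cookie are left untouched. */
#define COLOR_BITS 8UL
#define COLOR_MASK ((1UL << COLOR_BITS) - 1)	/* 0xff: up to 256 colors */

static unsigned long cookie_set_color(unsigned long cookie,
				      unsigned char color)
{
	return (cookie & ~COLOR_MASK) | color;
}

static unsigned char cookie_color(unsigned long cookie)
{
	return cookie & COLOR_MASK;
}
```

The mask arithmetic is why 256 is the hard limit: the color must round-trip through the stolen byte without disturbing the rest of the cookie.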
-off-by: Joel Fernandes (Google)
---
drivers/gpu/drm/i915/i915_request.c | 4 ++--
include/linux/irq_work.h    | 33 ++---
include/linux/irqflags.h| 4 ++--
kernel/bpf/stackmap.c | 2 +-
kernel/irq_work.c | 18
Due to earlier patches, the old way of computing a task's cookie when it
is added to a CGroup is outdated. Update it by fetching the group's
cookie using the new helpers.
Tested-by: Julien Desfossez
Signed-off-by: Joel Fernandes (Google)
---
kernel/sched/core.c | 15 ++-
1 file
core.c is already huge. The core-tagging interface code is largely
independent of it. Move it to its own file to make both files easier to
maintain.
Tested-by: Julien Desfossez
Signed-off-by: Joel Fernandes (Google)
---
kernel/sched/Makefile | 1 +
kernel/sched/core.c| 481
. This can confuse the logic. Add retry logic
if smt_mask changes between the loops.
Tested-by: Julien Desfossez
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Julien Desfossez
Signed-off-by: Vineeth Remanan Pillai
Signed-off-by: Joel Fernandes (Google)
Signed-off-by: Aaron Lu
Signed
From: Peter Zijlstra
Introduce the basic infrastructure to have a core wide rq->lock.
Tested-by: Julien Desfossez
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Julien Desfossez
Signed-off-by: Vineeth Remanan Pillai
---
kernel/Kconfig.preempt | 6 +++
kernel/sched/core.c    | 109
From: Aubrey Li
- Don't migrate if there is a cookie mismatch
Load balance tries to move task from busiest CPU to the
destination CPU. When core scheduling is enabled, if the
task's cookie does not match with the destination CPU's
core cookie, this task will be skipped by
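The cookie-match rule described above can be sketched as a small standalone model (the struct and function names here are invented for illustration, not the kernel's):

```c
#include <stdbool.h>

struct task_model { unsigned long core_cookie; };
struct core_model { unsigned long core_cookie; };

/* A task may migrate to the destination only when its cookie matches
 * the destination core's cookie; an idle core (cookie 0 in this
 * model) matches everything, mirroring the "idle matches everything"
 * rule of core scheduling. */
static bool cookie_match(const struct task_model *p,
			 const struct core_model *dst)
{
	if (dst->core_cookie == 0)	/* idle core: accepts any task */
		return true;
	return p->core_cookie == dst->core_cookie;
}
```

Load balancing would consult a check like this and simply skip candidates for which it returns false.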
From: Peter Zijlstra
Tested-by: Julien Desfossez
Signed-off-by: Peter Zijlstra (Intel)
---
kernel/sched/fair.c | 12 ++--
1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index bd6aed63f5e3..b4bc82f46fe7 100644
---
-by: Vineeth Pillai
Signed-off-by: Joel Fernandes (Google)
---
.../admin-guide/kernel-parameters.txt | 7 +
include/linux/entry-common.h | 2 +-
include/linux/sched.h | 12 +
kernel/entry/common.c | 25 +-
kernel/sched/core.c
From: Peter Zijlstra
Introduce task_struct::core_cookie as an opaque identifier for core
scheduling. When enabled, core scheduling will only allow matching
tasks to be on the core, where idle matches everything.
When task_struct::core_cookie is set (and core scheduling is enabled)
these tasks
usecase.
Tested-by: Julien Desfossez
Signed-off-by: Joel Fernandes (Google)
---
kernel/sched/core.c | 33 -
kernel/sched/fair.c | 40
kernel/sched/sched.h | 5 +
3 files changed, 65 insertions(+), 13 deletions(-)
diff
Add generic_idle_{enter,exit} helper functions to enter and exit kernel
protection when entering and exiting idle, respectively.
Tested-by: Julien Desfossez
Signed-off-by: Joel Fernandes (Google)
---
include/linux/entry-common.h | 18 ++
kernel/sched/idle.c | 11
.
Tested-by: Julien Desfossez
Signed-off-by: Joel Fernandes (Google)
---
arch/x86/include/asm/thread_info.h | 2 ++
kernel/sched/sched.h | 6 ++
2 files changed, 8 insertions(+)
diff --git a/arch/x86/include/asm/thread_info.h
b/arch/x86/include/asm/thread_info.h
index c448fcfa1b82
From: Vineeth Pillai
If there is only one long running local task and the sibling is
forced idle, it might not get a chance to run until a schedule
event happens on any CPU in the core.
So we check for this condition during a tick to see if a sibling
is starved and then give it a chance to
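A minimal sketch of such a starvation check, under the assumption that the tick handler compares how long the sibling has been forced idle against a threshold (all names are invented for illustration):

```c
#include <stdbool.h>

struct sibling_rq {
	bool forced_idle;         /* sibling was forced idle by core sched */
	unsigned long idle_start; /* tick count when forced idle began */
};

/* Return true when the forced-idle sibling has waited at least
 * 'threshold' ticks; the caller would then trigger a resched so the
 * sibling gets a chance to run. */
static bool sibling_starved(const struct sibling_rq *s,
			    unsigned long now, unsigned long threshold)
{
	return s->forced_idle && now - s->idle_start >= threshold;
}
```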
(Intel)
Signed-off-by: Vineeth Remanan Pillai
Signed-off-by: Julien Desfossez
Signed-off-by: Joel Fernandes (Google)
---
kernel/sched/deadline.c | 16 ++--
kernel/sched/fair.c | 32 +++-
kernel/sched/idle.c | 8
kernel/sched/rt.c
From: Peter Zijlstra
In preparation of playing games with rq->lock, abstract the thing
using an accessor.
Tested-by: Julien Desfossez
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Vineeth Remanan Pillai
Signed-off-by: Julien Desfossez
---
kernel/sched/core.c | 46
- Fix for 32bit build
- Aubrey Li
Aubrey Li (1):
sched: migration changes for core scheduling
Joel Fernandes (Google) (13):
sched/fair: Snapshot the min_vruntime of CPUs on force idle
arch/x86: Add a new TIF flag for untrusted tasks
kernel/entry: Add support for core-wide protection of kernel-mode
Hi,
Thanks Alan for your replies.
On Sat, Oct 17, 2020 at 1:24 PM Alan Stern wrote:
>
> [I sent this reply earlier, but since it hasn't shown up in the mailing
> list archives, I may have forgotten to include the proper CC's. At the
> risk of repeating myself, here it is again.]
Np, I did get
On Sat, Oct 17, 2020 at 7:31 AM Alan Stern wrote:
>
> On Fri, Oct 16, 2020 at 09:27:53PM -0400, j...@joelfernandes.org wrote:
> > Adding Alan as well as its memory barrier discussion ;-)
>
> I don't know the internals of how RCU works, so I'll just speak to the
> litmus test itself, ignoring
ues.
Fixed minor nit from Davidlohr.
v1->v3: minor nits.
(https://lore.kernel.org/lkml/20200719034210.2382053-1-joel@xxxxx/)
Joel Fernandes (Google) (6):
rcu/tree: Make rcu_do_batch count how many callbacks were executed
rcu/segcblist: Add counters to segcblist datastructure
rcu/trac
related to using donecbs's ->len field as a
temporary variable to save the segmented callback list's length. This cannot be
done anymore and is not needed.
Signed-off-by: Joel Fernandes (Google)
---
include/linux/rcu_segcblist.h | 2 +
kernel/rcu/rcu_segcblist.c |
from 0 is confusing and error-prone IMHO.
This commit therefore explicitly counts how many callbacks were executed in
rcu_do_batch() itself, and uses that to update the per-CPU segcb list's ->len
field, without relying on the negativity of rcl->len.
Signed-off-by: Joel Fernandes (Googl
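The counting approach can be modeled in plain userspace C. This is a simplified sketch, not the kernel's rcu_do_batch():

```c
#include <stddef.h>

struct cb {
	struct cb *next;
	void (*func)(struct cb *);
};

static int invoked;	/* test hook: bumped by the sample callback */

static void sample_cb(struct cb *c)
{
	(void)c;
	invoked++;
}

/* Walk the ready callbacks, invoke each, and return an explicit
 * count of how many ran -- instead of inferring the count from a
 * length field that was driven negative, which is the error-prone
 * pattern the commit message describes. */
static long do_batch_count(struct cb *head)
{
	long count = 0;

	while (head) {
		struct cb *next = head->next;

		head->func(head);
		count++;	/* explicit count of executed callbacks */
		head = next;
	}
	return count;		/* caller subtracts this from ->len */
}
```

The caller then adjusts the per-CPU length once, by the returned count, rather than reading sign information out of a reused field.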
Signed-off-by: Joel Fernandes (Google)
---
kernel/rcu/rcu_segcblist.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/kernel/rcu/rcu_segcblist.c b/kernel/rcu/rcu_segcblist.c
index 2dccbd29cd3a..271d5d9d7f60 100644
--- a/kernel/rcu/rcu_segcblist.c
+++ b/kernel/rcu/rcu_segcblist.c
@@ -10,7
Memory barriers are needed when updating the full length of the
segcblist; however, it is not fully clear why one is needed before and
after. This patch therefore adds additional comments to the function
header to explain it.
Signed-off-by: Joel Fernandes (Google)
---
kernel/rcu
in the respective segment.
Signed-off-by: Joel Fernandes (Google)
---
include/trace/events/rcu.h | 25 +
kernel/rcu/rcu_segcblist.c | 31 +++
kernel/rcu/rcu_segcblist.h | 5 +
kernel/rcu/tree.c | 9 +
4 files changed, 70 insertions
This memory barrier is not needed as rcu_segcblist_add_len() already
includes a memory barrier.
Signed-off-by: Joel Fernandes (Google)
---
kernel/rcu/tree.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 346a05506935..6c6d3c7036e6 100644
On Fri, Oct 9, 2020 at 4:14 PM Frederic Weisbecker wrote:
>
> On Wed, Sep 23, 2020 at 11:22:08AM -0400, Joel Fernandes (Google) wrote:
> > Currently, rcu_do_batch() depends on the unsegmented callback list's len
> > field
> > to know how many CBs are executed. This f
The following commit has been merged into the core/rcu branch of tip:
Commit-ID: 53922270d21de707a1a0ffaf1e07644e77fcb8db
Gitweb:
https://git.kernel.org/tip/53922270d21de707a1a0ffaf1e07644e77fcb8db
Author: Joel Fernandes (Google)
AuthorDate: Thu, 18 Jun 2020 16:29:49 -04:00
The following commit has been merged into the core/rcu branch of tip:
Commit-ID: 959954df0ca7da2111c3fb67a81798d15b9d
Gitweb:
https://git.kernel.org/tip/959954df0ca7da2111c3fb67a81798d15b9d
Author: Joel Fernandes (Google)
AuthorDate: Thu, 18 Jun 2020 16:29:55 -04:00
The following commit has been merged into the core/rcu branch of tip:
Commit-ID: a7886e899fd8334a03d37e66ad10295d175725ea
Gitweb:
https://git.kernel.org/tip/a7886e899fd8334a03d37e66ad10295d175725ea
Author: Joel Fernandes (Google)
AuthorDate: Thu, 18 Jun 2020 21:36:40 -04:00
The following commit has been merged into the core/rcu branch of tip:
Commit-ID: c30068f41a0e899f870e0158a2c69c68d738bf96
Gitweb:
https://git.kernel.org/tip/c30068f41a0e899f870e0158a2c69c68d738bf96
Author: Joel Fernandes (Google)
AuthorDate: Thu, 18 Jun 2020 21:36:39 -04:00
The following commit has been merged into the core/rcu branch of tip:
Commit-ID: 666ca2907e6b75960ce2f0fe50afc5d8a46f296d
Gitweb:
https://git.kernel.org/tip/666ca2907e6b75960ce2f0fe50afc5d8a46f296d
Author: Joel Fernandes (Google)
AuthorDate: Fri, 07 Aug 2020 13:07:20 -04:00
The following commit has been merged into the core/rcu branch of tip:
Commit-ID: f37599e6f06da47e49c3408afe66c5b6e83a90bd
Gitweb:
https://git.kernel.org/tip/f37599e6f06da47e49c3408afe66c5b6e83a90bd
Author: Joel Fernandes (Google)
AuthorDate: Fri, 07 Aug 2020 13:07:19 -04:00
when ready cbs were present, to when the ready callbacks were
invoked by the rcuop thread. This also further confirms that there is no
need to raise the softirq for ready cbs in the first place.
Cc: neer...@codeaurora.org
Signed-off-by: Joel Fernandes (Google)
---
v1->v2: Also cleaned up anot
On Fri, Oct 2, 2020 at 3:34 PM Paul E. McKenney wrote:
>
> On Tue, Sep 29, 2020 at 03:32:48PM -0400, Joel Fernandes wrote:
> > Hi Paul,
> >
> > On Tue, Sep 29, 2020 at 03:29:28PM -0400, Joel Fernandes (Google) wrote:
> > > RCU's hotplug design will help u
On Wed, Sep 30, 2020 at 6:42 PM Lokesh Gidra wrote:
>
> On Wed, Sep 30, 2020 at 3:32 PM Kirill A. Shutemov
> wrote:
> >
> > On Wed, Sep 30, 2020 at 10:21:17PM +, Kalesh Singh wrote:
> > > mremap time can be optimized by moving entries at the PMD/PUD level if
> > > the source and destination
On Wed, Sep 30, 2020 at 1:22 PM Michal Hocko wrote:
> > > > I think documenting is useful.
> > > >
> > > > Could it be more explicit in what the issue is? Something like:
> > > >
> > > > * Even with GFP_ATOMIC, calls to the allocator can sleep on PREEMPT_RT
> > > > systems. Therefore, the
On Wed, Sep 30, 2020 at 12:48 PM Michal Hocko wrote:
>
> On Wed 30-09-20 11:25:17, Joel Fernandes wrote:
> > On Fri, Sep 25, 2020 at 05:47:41PM +0200, Michal Hocko wrote:
> > > On Fri 25-09-20 17:31:29, Uladzislau Rezki wrote:
> > > > > &
On Fri, Sep 18, 2020 at 09:48:13PM +0200, Uladzislau Rezki (Sony) wrote:
> Hello, folk!
>
> This is another iteration of fixing kvfree_rcu() issues related
> to CONFIG_PROVE_RAW_LOCK_NESTING and CONFIG_PREEMPT_RT configs.
>
> The first discussion is here https://lkml.org/lkml/2020/8/9/195.
>
>
On Wed, Sep 30, 2020 at 04:39:53PM +0200, Vlastimil Babka wrote:
> On 9/30/20 12:07 AM, Uladzislau Rezki wrote:
> > On Tue, Sep 29, 2020 at 12:15:34PM +0200, Vlastimil Babka wrote:
> >> On 9/18/20 9:48 PM, Uladzislau Rezki (Sony) wrote:
> >>
> >> After reading all the threads and mulling over
On Fri, Sep 25, 2020 at 05:47:41PM +0200, Michal Hocko wrote:
> On Fri 25-09-20 17:31:29, Uladzislau Rezki wrote:
> > > > > >
> > > > > > All good points!
> > > > > >
> > > > > > On the other hand, duplicating a portion of the allocator
> > > > > > functionality
> > > > > > within RCU increases
Hi Paul,
On Tue, Sep 29, 2020 at 03:29:28PM -0400, Joel Fernandes (Google) wrote:
> RCU's hotplug design will help understand the requirements an RCU
> implementation needs to fulfill, such as deadlock avoidance.
>
> The rcu_barrier() section of the "Hotplug CPU" section
l E. McKenney
Signed-off-by: Joel Fernandes (Google)
---
kernel/rcu/tree.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 55d3700dd1e7..5efe0a98ea45 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -4119,7 +4119,9 @@ v
is rather incomplete.
This commit therefore continues the section by describing how RCU's
design handles CPU hotplug in a deadlock-free way.
Signed-off-by: Joel Fernandes (Google)
---
.../RCU/Design/Requirements/Requirements.rst | 30 +--
1 file changed, 28 insertions(+), 2
ee P1's update of numonline because the atomic_read()
does not provide any ordering.
Btw, it's cool how Paul/you removed the need for memory barriers for the
single-CPU case at all, by making the update to num_online_cpus in rcu_state,
from the CPU doing the onlining and offlining.
Reviewed-by:
On Mon, Sep 28, 2020 at 01:34:31PM -0700, Kees Cook wrote:
> On Sun, Sep 27, 2020 at 07:35:26PM -0400, Joel Fernandes wrote:
> > On Fri, Sep 25, 2020 at 05:47:14PM -0600, Shuah Khan wrote:
> > > This patch series is a result of discussion at the refcount_t BOF
> > > th
-time
jitter.
Passed 30 minute tests of TREE01 through TREE09 each.
Cc: neer...@codeaurora.org
Cc: fweis...@gmail.com
Signed-off-by: Joel Fernandes (Google)
---
kernel/rcu/tree.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index
On Thu, Sep 24, 2020 at 12:04:10PM +0530, Neeraj Upadhyay wrote:
> Clarify the "x" in rcuox/N naming in RCU_NOCB_CPU config
> description.
>
Reviewed-by: Joel Fernandes (Google)
thanks,
- Joel
> Signed-off-by: Neeraj Upadhyay
> ---
> kernel/rcu/Kconfig | 11 +
on doesn't change the overflow wrap around behavior.
>
> Signed-off-by: Shuah Khan
Reviewed-by: Joel Fernandes (Google)
thanks,
- Joel
> ---
> drivers/android/binder.c | 41 ---
> drivers/android/binder_internal.h | 3 ++-
> 2 files
On Fri, Sep 25, 2020 at 05:47:14PM -0600, Shuah Khan wrote:
> This patch series is a result of discussion at the refcount_t BOF
> the Linux Plumbers Conference. In this discussion, we identified
> a need for looking closely and investigating atomic_t usages in
> the kernel when it is used strictly
ree.c
> +++ b/kernel/rcu/tree.c
> @@ -3165,8 +3165,7 @@ static void kfree_rcu_work(struct work_s
> bkvhead[i] = NULL;
> krc_this_cpu_unlock(krcp, flags);
>
> - if (bkvhead[i])
> -
On Sat, Sep 26, 2020 at 10:10 AM Namhyung Kim wrote:
[...]
> On Sat, Sep 26, 2020 at 8:56 AM Joel Fernandes (Google)
> wrote:
> >
> > perf sched latency is really useful at showing worst-case latencies that
> > task
> > encountered since wakeup. However it sho
spending a lot of time
backtracking to the start of the latency in "perf sched script" which wastes a
lot of time.
This patch therefore adds a new column "Max delay start". Considering this,
also rename "Maximum delay at" to "Max delay end" as it's easier to un
On Tue, Sep 22, 2020 at 09:52:43PM -0400, Joel Fernandes wrote:
> On Tue, Sep 22, 2020 at 09:46:22PM -0400, Joel Fernandes wrote:
> > On Fri, Aug 28, 2020 at 11:29:27PM +0200, Peter Zijlstra wrote:
> > >
> > >
> > > This is still a horrible patch..
> &
Signed-off-by: Joel Fernandes (Google)
---
kernel/rcu/rcu_segcblist.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/kernel/rcu/rcu_segcblist.c b/kernel/rcu/rcu_segcblist.c
index df0f31e30947..b65ac8c85b56 100644
--- a/kernel/rcu/rcu_segcblist.c
+++ b/kernel/rcu/rcu_segcblist.c
@@ -10,7
in the respective segment.
Signed-off-by: Joel Fernandes (Google)
---
include/trace/events/rcu.h | 25 +
kernel/rcu/rcu_segcblist.c | 34 ++
kernel/rcu/rcu_segcblist.h | 5 +
kernel/rcu/tree.c | 9 +
4 files changed, 73 insertions
related to using donecbs's ->len field as a
temporary variable to save the segmented callback list's length. This cannot be
done anymore and is not needed.
Signed-off-by: Joel Fernandes (Google)
---
include/linux/rcu_segcblist.h | 2 +
kernel/rcu/rcu_segcblist.c |
from 0 is confusing and error-prone IMHO.
This commit therefore explicitly counts how many callbacks were executed in
rcu_do_batch() itself, and uses that to update the per-CPU segcb list's ->len
field, without relying on the negativity of rcl->len.
Signed-off-by: Joel Fernandes (Google)
---
2053-1-j...@joelfernandes.org/)
Joel Fernandes (Google) (4):
rcu/tree: Make rcu_do_batch count how many callbacks were executed
rcu/segcblist: Add counters to segcblist datastructure
rcu/trace: Add tracing for how segcb list changes
rcu/segcblist: Remove useless rcupdate.h include
include
On Tue, Sep 22, 2020 at 09:46:22PM -0400, Joel Fernandes wrote:
> On Fri, Aug 28, 2020 at 11:29:27PM +0200, Peter Zijlstra wrote:
> >
> >
> > This is still a horrible patch..
>
> Hi Peter,
> I wrote a new patch similar to this one and it fares much better in my t
series. I will provide an updated patch later based on v7 series.
(Works only for SMT2, maybe we can generalize it more..)
8<---
From: "Joel Fernandes (Google)"
Subject: [PATCH] sched: Sync the min_vruntime of cores when the system enters
force-idle
This patch provide
On Sun, Sep 20, 2020 at 09:21:47PM -0400, Joel Fernandes (Google) wrote:
>
> NOTE: I marked as RFC since TREE 04 fails even though TREE03 passes. I don't
> see any RCU errors in the counters, however when shutdown thread tries to
> shutdown the system, it hangs when trying
from 0 is confusing and error-prone IMHO.
This commit therefore explicitly counts how many callbacks were executed in
rcu_do_batch() itself, and uses that to update the per-CPU segcb list's ->len
field, without relying on the negativity of rcl->len.
Signed-off-by: Joel Fernandes (Google)
---
ues.
Fixed minor nit from Davidlohr.
v1->v3: minor nits.
(https://lore.kernel.org/lkml/20200719034210.2382053-1-j...@joelfernandes.org/)
Joel Fernandes (Google) (5):
rcu/tree: Make rcu_do_batch count how many callbacks were executed
rcu/segcblist: Add counters to segcblist datastructure
rc
Signed-off-by: Joel Fernandes (Google)
---
kernel/rcu/rcu_segcblist.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/kernel/rcu/rcu_segcblist.c b/kernel/rcu/rcu_segcblist.c
index 72b284f965aa..13f8f181521d 100644
--- a/kernel/rcu/rcu_segcblist.c
+++ b/kernel/rcu/rcu_segcblist.c
@@ -10,7
rcu_barrier() may skip queuing a callback and return too early. Fix it by
storing state to indicate that callbacks are being invoked and the callback list
should not appear as non-empty. This is a terrible hack; however, it still does
not fix TREE04.
Signed-off-by: Joel Fernandes (Google)
---
include
in the respective segment.
Signed-off-by: Joel Fernandes (Google)
---
include/trace/events/rcu.h | 25 +
kernel/rcu/rcu_segcblist.c | 34 ++
kernel/rcu/rcu_segcblist.h | 5 +
kernel/rcu/tree.c | 9 +
4 files changed, 73 insertions
related to using donecbs's ->len field as a
temporary variable to save the segmented callback list's length. This is not
needed any more.
Signed-off-by: Joel Fernandes (Google)
---
include/linux/rcu_segcblist.h | 2 +
kernel/rcu/rcu_segcblist.c |
On Fri, Aug 28, 2020 at 03:51:09PM -0400, Julien Desfossez wrote:
> From: Peter Zijlstra
>
> Instead of only selecting a local task, select a task for all SMT
> siblings for every reschedule on the core (irrespective which logical
> CPU does the reschedule).
>
> During a CPU hotplug event,
On Mon, Sep 14, 2020 at 03:47:38PM -0700, Jakub Kicinski wrote:
> On Mon, 14 Sep 2020 16:21:22 -0400 Joel Fernandes wrote:
> > On Tue, Sep 08, 2020 at 05:27:51PM -0700, Jakub Kicinski wrote:
> > > On Tue, 08 Sep 2020 21:15:56 +0300 niko...@cumulusnetworks.com wrote:
> > &
I give up, lockdep_is_held() is not defined without
> CONFIG_LOCKDEP, let's just go with your patch..
Care to send a patch just for the RCU macro then? Not sure what Dave is
applying but if the net-next tree is not taking the RCU macro change, then
send another one with my tag:
Reviewed-by: Jo
On Wed, Sep 09, 2020 at 04:22:41AM -0700, Paul E. McKenney wrote:
> On Wed, Sep 09, 2020 at 07:03:39AM +, Zhang, Qiang wrote:
> >
> > When config preempt RCU, and then there are multiple levels node, the
> > current task is preempted in rcu read critical region.
> > the current task be
On Mon, Sep 14, 2020 at 07:55:18AM +, Zhang, Qiang wrote:
> Hello Paul
>
> I have some questions for you .
> in force_qs_rnp func , if "f(rdp)" func return true we will call
> rcu_report_qs_rnp func
> report a quiescent state for this rnp node, and clear grpmask form
> rnp->qsmask.
>
Hi Rob,
(Back from holidays, digging through the email pile). Reply below:
On Thu, Sep 3, 2020 at 2:09 PM Rob Herring wrote:
>
> On Wed, Sep 2, 2020 at 3:47 PM Joel Fernandes wrote:
> >
> > On Wed, Sep 2, 2020 at 4:01 PM Nachammai Karuppiah
> > wrote:
> > >
On Sat, Sep 05, 2020 at 05:24:06PM -0400, Joel Fernandes wrote:
> Hi Paul,
>
> On Thu, Sep 03, 2020 at 01:06:39PM -0700, Paul E. McKenney wrote:
> > On Wed, Sep 02, 2020 at 08:54:10AM -0700, Paul E. McKenney wrote:
> > > On Tue, Sep 01, 2020 at 06:51:28PM -070
Hi Paul,
On Thu, Sep 03, 2020 at 01:06:39PM -0700, Paul E. McKenney wrote:
> On Wed, Sep 02, 2020 at 08:54:10AM -0700, Paul E. McKenney wrote:
> > On Tue, Sep 01, 2020 at 06:51:28PM -0700, Davidlohr Bueso wrote:
> > > On Tue, 01 Sep 2020, Paul E. McKenney wrote:
> > >
> > > > And it appears that
On Thu, Sep 3, 2020 at 9:20 AM Thomas Gleixner wrote:
>
> On Thu, Sep 03 2020 at 00:34, Joel Fernandes wrote:
> > On Wed, Sep 2, 2020 at 12:57 PM Dario Faggioli wrote:
> >> 2) protection of the kernel from the other thread running in userspace
> >> may be achieved
On Thu, Sep 3, 2020 at 9:43 AM Dario Faggioli wrote:
>
> On Thu, 2020-09-03 at 00:34 -0400, Joel Fernandes wrote:
> > On Wed, Sep 2, 2020 at 12:57 PM Dario Faggioli
> > wrote:
> > > 2) protection of the kernel from the other thread running in
> > > userspac
On Wed, Sep 2, 2020 at 12:57 PM Dario Faggioli wrote:
>
> On Wed, 2020-09-02 at 09:53 +0200, Thomas Gleixner wrote:
> > On Tue, Sep 01 2020 at 21:29, Joel Fernandes wrote:
> > > On Tue, Sep 01, 2020 at 10:02:10PM +0200, Thomas Gleixner wrote:
> > > >
> > &g
On Wed, Sep 2, 2020 at 5:47 PM Joel Fernandes wrote:
>
> On Wed, Sep 2, 2020 at 4:01 PM Nachammai Karuppiah
> wrote:
> >
> > Hi,
> >
> > This patch series adds support to store trace events in pstore.
> >
Been a long day...
> > Stori
On Wed, Sep 2, 2020 at 4:01 PM Nachammai Karuppiah
wrote:
>
> Hi,
>
> This patch series adds support to store trace events in pstore.
>
> Storing trace entries in persistent RAM would help in understanding what
> happened just before the system went down. The trace events that led to the
> crash
Hi Thomas,
On Wed, Sep 02, 2020 at 09:53:29AM +0200, Thomas Gleixner wrote:
[...]
> >> --- /dev/null
> >> +++ b/include/linux/pretend_ht_secure.h
> >> @@ -0,0 +1,21 @@
> >> +#ifndef _LINUX_PRETEND_HT_SECURE_H
> >> +#define _LINUX_PRETEND_HT_SECURE_H
> >> +
> >> +#ifdef CONFIG_PRETEND_HT_SECURE
>
Hi Thomas,
On Tue, Sep 01, 2020 at 10:02:10PM +0200, Thomas Gleixner wrote:
[..]
> > The reason for that is, the loop can switch into another thread, so we
> > have to do unsafe_exit() for the old thread, and unsafe_enter() for
> > the new one while handling the tif work properly. We could get
>
On Tue, Sep 1, 2020 at 5:23 PM Vineeth Pillai
wrote:
> > Also, Peter said pick_seq is for core-wide picking. If you want to add
> > another semantic, then maybe add another counter which has a separate
> > meaning and justify why you are adding it.
> I think just one counter is enough. Unless,
Hi Vineeth,
On Tue, Sep 01, 2020 at 08:34:23AM -0400, Vineeth Pillai wrote:
> Hi Joel,
>
> On 9/1/20 1:10 AM, Joel Fernandes wrote:
> > 3. The 'Rescheduling siblings' loop of pick_next_task() is quite fragile. It
> > calls various functions on rq->core_pick which