Fix the parameter name of __page_to_voff to match its definition.
At present, we don't see any issue, as page_to_virt's caller
declares 'page'.
Fixes: 9f2875912dac ("arm64: mm: restrict virt_to_page() to the linear mapping")
Signed-off-by: Neeraj Upadhyay <neer...@codeaurora.org>
On 08/08/2017 11:32 PM, Paul E. McKenney wrote:
On Tue, Aug 08, 2017 at 10:50:26PM +0530, Neeraj Upadhyay wrote:
If rcu_kick_kthreads is set and a grace period is in progress, check_cpu_stall()
checks whether jiffies is past rsp->jiffies_stall,
doing ordered accesses to avoid
can be skipped if rcu_cpu_stall_suppress is set.
Fixes: 8c7c4829a81c ("rcu: Awaken grace-period kthread if too long since FQS")
Signed-off-by: Neeraj Upadhyay <neer...@codeaurora.org>
---
kernel/rcu/tree.c | 7 +--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/
On 08/07/2017 06:10 PM, Paul E. McKenney wrote:
On Mon, Aug 07, 2017 at 11:20:10AM +0530, Neeraj Upadhyay wrote:
The pending-callbacks check in rcu_prepare_for_idle() is inverted:
it should accelerate if there are pending callbacks, but the check
does the opposite. Fix it.
Fixes
The pending-callbacks check in rcu_prepare_for_idle() is inverted:
it should accelerate if there are pending callbacks, but the check
does the opposite. Fix it.
Fixes: 15fecf89e46a ("srcu: Abstract multi-tail callback list handling")
Signed-off-by: Neeraj Upadhyay <neer...@c
Hi,
We have one query regarding the behavior of RCU expedited grace period,
for the scenario where resched_cpu() in sync_sched_exp_handler() fails to
acquire the rq lock and returns w/o setting need_resched. In this
case, how do we ensure that the CPU notifies RCU about the
end of sched grace
On 09/17/2017 06:30 AM, Paul E. McKenney wrote:
On Fri, Sep 15, 2017 at 04:44:38PM +0530, Neeraj Upadhyay wrote:
Hi,
We have one query regarding the behavior of RCU expedited grace period,
for the scenario where resched_cpu() in sync_sched_exp_handler() fails to
acquire the rq lock and returns w
On 08/31/2017 06:42 AM, Tejun Heo wrote:
On Wed, Aug 30, 2017 at 06:03:19PM -0700, Tejun Heo wrote:
Oops, more like the following.
diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index df2e0f1..6f34025 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -683,7
Fix this by adding a check to verify that the css returned from
cgroup_taskset_first() is set, before proceeding.
Signed-off-by: Neeraj Upadhyay <neer...@codeaurora.org>
---
Hi,
We observed this issue for cgroup code corresponding to stable
v4.4.85 snapshot 3144d81 ("cgroup, kthread: close race wi
iter_next()
__put_task_struct()
Fix this problem by moving the css_set and cg_list fetch in
cgroup_exit() inside the css_set lock.
Signed-off-by: Neeraj Upadhyay <neer...@codeaurora.org>
---
Hi,
We observed this issue for cgroup code corresponding to stable
v4.4.85 snapshot 3
Hi,
We have one query regarding __hrtimer_get_next_event().
expires_next.tv64 is set to 0 if it is < 0. We observed
an hrtimer interrupt storm for one of the hrtimers with
the below properties:
* Expires for the hrtimer was set to KTIME_MAX.
* cpu base was HRTIMER_BASE_REALTIME with
Hi,
One query regarding srcu_funnel_exp_start() function in
kernel/rcu/srcutree.c.
static void srcu_funnel_exp_start(struct srcu_struct *sp,
                                  struct srcu_node *snp, unsigned long s)
{
        if (!ULONG_CMP_LT(sp->srcu_gp_seq_needed_exp, s))
c = 15152092)
ts_delta = (tv_sec = -1508803200, tv_nsec = -767601)
wall_to_monotonic is bigger than ts_delta, leading to
wall_to_monotonic becoming a positive value and resulting in
a negative off_real.
Thanks
Neeraj
On 10/26/2017 06:27 PM, Thomas Gleixner wrote:
On Thu, 26 Oct 2017, Neeraj Upadhyay wro
On 10/27/2017 05:56 PM, Paul E. McKenney wrote:
On Fri, Oct 27, 2017 at 02:23:07PM +0530, Neeraj Upadhyay wrote:
Hi,
One query regarding srcu_funnel_exp_start() function in
kernel/rcu/srcutree.c.
static void srcu_funnel_exp_start(struct srcu_struct *sp, struct
srcu_node *snp
On 10/28/2017 03:50 AM, Paul E. McKenney wrote:
On Fri, Oct 27, 2017 at 10:15:04PM +0530, Neeraj Upadhyay wrote:
On 10/27/2017 05:56 PM, Paul E. McKenney wrote:
On Fri, Oct 27, 2017 at 02:23:07PM +0530, Neeraj Upadhyay wrote:
Hi,
One query regarding srcu_funnel_exp_start() function in
kernel
On 01/18/2018 08:32 AM, Lai Jiangshan wrote:
On Wed, Jan 17, 2018 at 4:08 AM, Neeraj Upadhyay <neer...@codeaurora.org> wrote:
On 01/16/2018 11:05 PM, Tejun Heo wrote:
Hello, Neeraj.
On Mon, Jan 15, 2018 at 02:08:12PM +0530, Neeraj Upadhyay wrote:
- kworker/0:0 gets a chance to run o
Fix this by deferring the work to some other idle worker,
if the current worker is not bound to its pool's CPU.
Signed-off-by: Neeraj Upadhyay <neer...@codeaurora.org>
---
kernel/workqueue.c | 11 +++
1 file changed, 11 insertions(+)
diff --git a/kernel/workqueue.c b/kernel/workque
The smp_mb() in cpuhp_thread_fun() appears to be misplaced and needs
to be after the load of st->should_run, to prevent reordering of the
later loads/stores w.r.t. that load.
Signed-off-by: Neeraj Upadhyay
---
kernel/cpu.c | 6 +++---
1 file changed, 3 insertions(+), 3 del
On 09/05/2018 06:47 PM, Thomas Gleixner wrote:
On Wed, 5 Sep 2018, Neeraj Upadhyay wrote:
On 09/05/2018 05:53 PM, Thomas Gleixner wrote:
And looking closer this is a general issue. Just that the TEARDOWN state
makes it simple to observe. It's universaly broken, when the first teardown
On 09/06/2018 01:48 PM, Thomas Gleixner wrote:
On Thu, 6 Sep 2018, Neeraj Upadhyay wrote:
On 09/05/2018 06:47 PM, Thomas Gleixner wrote:
On Wed, 5 Sep 2018, Neeraj Upadhyay wrote:
On 09/05/2018 05:53 PM, Thomas Gleixner wrote:
And looking closer this is a general issue. Just
On 09/05/2018 05:53 PM, Thomas Gleixner wrote:
On Wed, 5 Sep 2018, Thomas Gleixner wrote:
On Tue, 4 Sep 2018, Neeraj Upadhyay wrote:
ret = cpuhp_down_callbacks(cpu, st, target);
if (ret && st->state > CPUHP_TEARDOWN_CPU &&
On 01/16/2018 11:05 PM, Tejun Heo wrote:
Hello, Neeraj.
On Mon, Jan 15, 2018 at 02:08:12PM +0530, Neeraj Upadhyay wrote:
- kworker/0:0 gets a chance to run on cpu1; while processing
a work item, it goes to sleep. However, it does not decrement
pool->nr_running. This is because WORKER_REBO
reset directly in
_cpu_down().
Fixes: 4dddfb5faa61 ("smp/hotplug: Rewrite AP state machine core")
Signed-off-by: Neeraj Upadhyay
---
kernel/cpu.c | 9 -
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/kernel/cpu.c b/kernel/cpu.c
index aa7fe85..9f49edb 100644
--
On 1/12/2021 11:01 PM, Paul E. McKenney wrote:
On Mon, Jan 11, 2021 at 05:15:58PM +0530, Neeraj Upadhyay wrote:
Correctly trace whether the outgoing CPU blocks the current grace period
in rcutree_dying_cpu().
Signed-off-by: Neeraj Upadhyay
Good catch, queued, thank you! Please see below for my usual
On 9/23/2020 8:52 PM, Joel Fernandes (Google) wrote:
Currently, rcu_do_batch() depends on the unsegmented callback list's len field
to know how many CBs are executed. This field counts down from 0 as CBs are
dequeued. It is possible that all CBs could not be run because of reaching
limits
On 9/23/2020 8:52 PM, Joel Fernandes (Google) wrote:
Track how the segcb list changes before/after acceleration, during
queuing and during dequeuing.
This has proved useful to discover an optimization to avoid unwanted GP
requests when there are no callbacks accelerated. The overhead is
Hi James,
Have a few queries on the ARM SDEI Linux code. The queries are listed below; can
you please help provide your insights on these?
1. Looks like the interrupt bind interface (SDEI_1_0_FN_SDEI_INTERRUPT_BIND)
is not available for clients to use; can you please share information on
why it is not
invoked by the rcuoc kthread. This provides further evidence that
there is no need to invoke rcu_core() for offloaded callbacks that are
ready to invoke.
Cc: Neeraj Upadhyay
Signed-off-by: Joel Fernandes (Google)
Signed-off-by: Paul E. McKenney
Reviewed
h waits for all read side sections, where
incoming/outgoing cpus are considered online, for RCU i.e. after
rcu_cpu_starting() and before rcu_report_dead().
Signed-off-by: Neeraj Upadhyay
---
Below is the reproducer for issue described in point 3; this snippet
is based on klitmus generated test
ue being 0,
for these smp_call_function() callbacks running from the idle loop.
However, this commit missed updating a preexisting underflow check
of dynticks_nmi_nesting, which checks for a non-zero positive value.
Fix this warning and, while at it, read the counter only once.
Signed-off-by: Neeraj Upadhyay
---
Hi,
I wa
ich is done
in RCU_GP_WAIT_FQS), it's possible that the RCU kthread never wakes up.
Report this in stall warnings if the GP kthread is in RCU_GP_WAIT_FQS
state, the timeout has elapsed, and the kthread has not been woken.
Signed-off-by: Neeraj Upadhyay
---
kernel/rcu/tree.c | 25 +++-
Hi Paul,
On 11/17/2020 6:10 AM, paul...@kernel.org wrote:
From: "Paul E. McKenney"
There is a need for a polling interface for SRCU grace periods. This
polling needs to distinguish between an SRCU instance being idle on the
one hand or in the middle of a grace period on the other. This
Hi,
For ARM Cortex-A76, A77, A78 cores (which, as per the TRM, support AMU),
the AA64PFR0[47:44] field is not set, and AMU does not get enabled for them.
Can you please provide support for these CPUs in cpufeature.c?
Thanks
Neeraj
--
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a
Hi James,
Sorry for late reply. Thanks for your comments!
On 10/16/2020 9:57 PM, James Morse wrote:
Hi Neeraj,
On 15/10/2020 07:07, Neeraj Upadhyay wrote:
1. Looks like interrupt bind interface (SDEI_1_0_FN_SDEI_INTERRUPT_BIND) is not
available
for clients to use; can you please share
ction mess with queuing? Locking considerations,
of course!
Link: https://lore.kernel.org/rcu/20201112201547.gf3365...@moria.home.lan/
Reported-by: Kent Overstreet
Signed-off-by: Paul E. McKenney
---
Reviewed-by: Neeraj Upadhyay
Thanks
Neeraj
kernel/rcu/srcut
Tiny SRCU call_srcu() function into callback-queuing and
start-grace-period portions, with the latter in a new function named
srcu_gp_start_if_needed().
Link: https://lore.kernel.org/rcu/20201112201547.gf3365...@moria.home.lan/
Reported-by: Kent Overstreet
Signed-off-by: Paul E. McKenney
---
Reviewed-
Hi Paul,
On 11/17/2020 6:10 AM, paul...@kernel.org wrote:
From: "Paul E. McKenney"
There is a need for a polling interface for SRCU grace
periods, so this commit supplies get_state_synchronize_srcu(),
start_poll_synchronize_srcu(), and poll_state_synchronize_srcu() for this
purpose. The
On 11/17/2020 6:10 AM, paul...@kernel.org wrote:
From: "Paul E. McKenney"
There is a need for a polling interface for SRCU grace
periods, so this commit supplies get_state_synchronize_srcu(),
start_poll_synchronize_srcu(), and poll_state_synchronize_srcu() for this
purpose. The first can
On 11/21/2020 5:46 AM, Paul E. McKenney wrote:
On Fri, Nov 20, 2020 at 05:31:43PM +0530, Neeraj Upadhyay wrote:
On 11/17/2020 6:10 AM, paul...@kernel.org wrote:
From: "Paul E. McKenney"
There is a need for a polling interface for SRCU grace
periods, so this commi
On 11/21/2020 5:43 AM, Paul E. McKenney wrote:
On Fri, Nov 20, 2020 at 05:28:32PM +0530, Neeraj Upadhyay wrote:
Hi Paul,
On 11/17/2020 6:10 AM, paul...@kernel.org wrote:
From: "Paul E. McKenney"
There is a need for a polling interface for SRCU grace
periods, so this commi
Overstreet
[ paulmck: Add EXPORT_SYMBOL_GPL() per kernel test robot feedback. ]
[ paulmck: Apply feedback from Neeraj Upadhyay. ]
Link: https://lore.kernel.org/lkml/20201117004017.GA7444@paulmck-ThinkPad-P72/
Signed-off-by: Paul E. McKenney
---
include/linux/rcupdate.h | 2 ++
include/linux/srcu.
On 11/21/2020 6:29 AM, paul...@kernel.org wrote:
From: "Paul E. McKenney"
There is a need for a polling interface for SRCU grace periods. This
polling needs to distinguish between an SRCU instance being idle on the
one hand or in the middle of a grace period on the other. This commit
On 11/22/2020 11:31 PM, Paul E. McKenney wrote:
On Sun, Nov 22, 2020 at 07:57:26PM +0530, Neeraj Upadhyay wrote:
On 11/21/2020 5:43 AM, Paul E. McKenney wrote:
On Fri, Nov 20, 2020 at 05:28:32PM +0530, Neeraj Upadhyay wrote:
Hi Paul,
On 11/17/2020 6:10 AM, paul...@kernel.org wrote:
From
Hi Paul,
On 11/12/2020 1:01 AM, Paul E. McKenney wrote:
On Wed, Nov 11, 2020 at 07:37:37PM +0530, Neeraj Upadhyay wrote:
For a new grace period request, the RCU GP kthread transitions
through the following states:
a. [RCU_GP_WAIT_GPS] -> [RCU_GP_DONE_GPS]
Initial state, where GP kthread wa
ich is done
in RCU_GP_WAIT_FQS), it's possible that the RCU kthread never wakes up.
Report this in stall warnings if the GP kthread is in RCU_GP_WAIT_FQS
state, the timeout has elapsed, and the kthread has not been woken.
Signed-off-by: Neeraj Upadhyay
---
Changes in V2:
- Documentation update.
On 11/24/2020 2:42 AM, Paul E. McKenney wrote:
On Mon, Nov 23, 2020 at 10:13:13AM +0530, Neeraj Upadhyay wrote:
On 11/21/2020 6:29 AM, paul...@kernel.org wrote:
From: "Paul E. McKenney"
There is a need for a polling interface for SRCU grace
periods, so this commi
On 11/24/2020 1:25 AM, Paul E. McKenney wrote:
On Mon, Nov 23, 2020 at 10:01:13AM +0530, Neeraj Upadhyay wrote:
On 11/21/2020 6:29 AM, paul...@kernel.org wrote:
From: "Paul E. McKenney"
There is a need for a polling interface for SRCU grace periods. This
polling needs to d
Thanks Marc, Vladimir, Mark, Sudeep for your inputs!
Thanks
Neeraj
On 11/20/2020 3:43 PM, Mark Rutland wrote:
On Fri, Nov 20, 2020 at 09:09:00AM +, Vladimir Murzin wrote:
On 11/20/20 8:56 AM, Marc Zyngier wrote:
On 2020-11-20 04:30, Neeraj Upadhyay wrote:
Hi,
For ARM cortex A76, A77
Overstreet
[ paulmck: Add EXPORT_SYMBOL_GPL() per kernel test robot feedback. ]
[ paulmck: Apply feedback from Neeraj Upadhyay. ]
Link: https://lore.kernel.org/lkml/20201117004017.GA7444@paulmck-ThinkPad-P72/
Signed-off-by: Paul E. McKenney
---
For version in -rcu dev
Reviewed-by: Neeraj Upadhyay
547.gf3365...@moria.home.lan/
Reported-by: Kent Overstreet
Signed-off-by: Paul E. McKenney
---
Reviewed-by: Neeraj Upadhyay
Thanks
Neeraj
Documentation/RCU/Design/Requirements/Requirements.rst | 18 ++
1 file changed, 18 insertions(+)
diff --git a/Documentation/
On 11/28/2020 7:46 AM, Paul E. McKenney wrote:
On Wed, Nov 25, 2020 at 10:03:26AM +0530, Neeraj Upadhyay wrote:
On 11/24/2020 10:48 AM, Neeraj Upadhyay wrote:
On 11/24/2020 1:25 AM, Paul E. McKenney wrote:
On Mon, Nov 23, 2020 at 10:01:13AM +0530, Neeraj Upadhyay wrote:
On 11/21/2020
On 11/24/2020 10:48 AM, Neeraj Upadhyay wrote:
On 11/24/2020 1:25 AM, Paul E. McKenney wrote:
On Mon, Nov 23, 2020 at 10:01:13AM +0530, Neeraj Upadhyay wrote:
On 11/21/2020 6:29 AM, paul...@kernel.org wrote:
From: "Paul E. McKenney"
There is a need for a polling interfac
On 11/25/2020 1:00 AM, Paul E. McKenney wrote:
On Tue, Nov 24, 2020 at 10:44:24AM +0530, Neeraj Upadhyay wrote:
On 11/24/2020 2:42 AM, Paul E. McKenney wrote:
On Mon, Nov 23, 2020 at 10:13:13AM +0530, Neeraj Upadhyay wrote:
On 11/21/2020 6:29 AM, paul...@kernel.org wrote:
From: "
the capability is uniformly provided.
Signed-off-by: Neeraj Upadhyay
---
arch/arm64/kernel/cpu_errata.c | 16
arch/arm64/kernel/entry.S | 26 +-
2 files changed, 41 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel
Hi Marc,
On 7/9/19 6:38 PM, Marc Zyngier wrote:
Hi Neeraj,
On 09/07/2019 12:22, Neeraj Upadhyay wrote:
For CPUs which do not support the pstate.ssbs feature, EL0
might not retain spsr.ssbs. This is problematic if this
task migrates to a CPU supporting this feature, thus
relying on its state
Hi Paul,
On 9/24/2020 2:33 AM, Paul E. McKenney wrote:
On Wed, Sep 23, 2020 at 12:59:33PM +0530, Neeraj Upadhyay wrote:
Currently, for non-preempt kernels (with CONFIG_PREEMPTION=n),
rcu_blocking_is_gp() checks (with preemption disabled) whether
there is only one cpu online. It uses
Clarify the "x" in rcuox/N naming in RCU_NOCB_CPU config
description.
Signed-off-by: Neeraj Upadhyay
---
kernel/rcu/Kconfig | 11 ++-
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/kernel/rcu/Kconfig b/kernel/rcu/Kconfig
index b71e21f..5b22747 100644
--- a/
Hi Paul,
On 9/25/2020 4:29 AM, Paul E. McKenney wrote:
On Thu, Sep 24, 2020 at 12:04:10PM +0530, Neeraj Upadhyay wrote:
Clarify the "x" in rcuox/N naming in RCU_NOCB_CPU config
description.
Signed-off-by: Neeraj Upadhyay
Applied with a few additional updates as shown below.
gle)
---
Reviewed-by: Neeraj Upadhyay
include/linux/rcu_segcblist.h | 1 +
kernel/rcu/rcu_segcblist.c| 120 ++
kernel/rcu/rcu_segcblist.h| 2 -
3 files changed, 79 insertions(+), 44 deletions(-)
diff --git a/include/linux/rcu_segcblist.h b/include/li
.
Reviewed-by: Frederic Weisbecker
Suggested-by: Frederic Weisbecker
Signed-off-by: Joel Fernandes (Google)
---
Reviewed-by: Neeraj Upadhyay
kernel/rcu/srcutree.c | 5 -
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index
is minimal as
each segment's length is now stored in the respective segment.
Reviewed-by: Frederic Weisbecker
Reviewed-by: Neeraj Upadhyay
Signed-off-by: Joel Fernandes (Google)
---
include/trace/events/rcu.h | 25 +
kernel/rcu/rcu_segcblist.c | 34
Hi Paul,
On 9/23/2020 1:59 AM, Paul E. McKenney wrote:
On Tue, Sep 22, 2020 at 01:15:57AM +0530, Neeraj Upadhyay wrote:
Currently, for non-preempt kernels (with CONFIG_PREEMPTION=n),
rcu_blocking_is_gp() checks (with preemption disabled) whether
there is only one cpu online. It uses
ing guarantees. Updating rcu_state.n_online_cpus
from the control CPU would result in unnecessary calls to the
synchronize_rcu() slow path during the CPU-online process, but that
should have negligible impact.
Signed-off-by: Neeraj Upadhyay
---
Changes in V2:
- Make rcu_state.n_online_cpus int, instead of atomic_t.
-
On 6/25/19 2:28 PM, Linus Walleij wrote:
On Mon, Jun 17, 2019 at 11:35 AM Neeraj Upadhyay wrote:
From: Srinivas Ramana
Introduce the irq_enable callback, which will be the same as irq_unmask
except that it will also clear the status bit before unmasking.
This will help in clearing any erroneous
that these unexpected interrupts get cleared.
Signed-off-by: Srinivas Ramana
Signed-off-by: Neeraj Upadhyay
---
Changes since v2:
- Renamed function to msm_gpio_irq_clear_unmask()
drivers/pinctrl/qcom/pinctrl-msm.c | 25 -
1 file changed, 24 insertions(+), 1 deletion
Hi Marc,
On 7/9/19 7:52 PM, Marc Zyngier wrote:
On 09/07/2019 15:18, Neeraj Upadhyay wrote:
Hi Marc,
On 7/9/19 6:38 PM, Marc Zyngier wrote:
Hi Neeraj,
On 09/07/2019 12:22, Neeraj Upadhyay wrote:
For cpus which do not support pstate.ssbs feature, el0
might not retain spsr.ssbs
that these unexpected interrupts get cleared.
Signed-off-by: Srinivas Ramana
Signed-off-by: Neeraj Upadhyay
---
Changes since v1:
- Extracted common code into __msm_gpio_irq_unmask().
drivers/pinctrl/qcom/pinctrl-msm.c | 25 -
1 file changed, 24 insertions(+), 1 deletion
Quoting tengf...@codeaurora.org (2019-06-11 03:41:26)
On 2019-06-10 22:51, Stephen Boyd wrote:
> Quoting Linus Walleij (2019-06-07 14:08:10)
>> On Fri, May 31, 2019 at 8:52 AM Tengfei Fan
>> wrote:
>> >> > The gpio interrupt status bit is getting set after the
>> > irq is disabled and
Thanks for the review, Linus.
On 6/17/19 5:20 PM, Linus Walleij wrote:
On Mon, Jun 17, 2019 at 12:35 PM Neeraj Upadhyay wrote:
Hi Stephen, there is one use case with is not covered by commit
b55326dc969e (
"pinctrl: msm: Really mask level interrupts to prevent latching"). That
ha
Hi,
I have one query regarding pseudo NMI support on GIC v3; from what I
could understand, GIC v3 supports pseudo NMI setup for SPIs and PPIs.
However, request_nmi() in the irq framework requires the NMI to be a per-CPU
interrupt source (it checks for IRQF_PERCPU). Can you please help me
understand this
Hi Marc,
On 5/8/2020 4:15 PM, Marc Zyngier wrote:
On Thu, 07 May 2020 17:06:19 +0100,
Neeraj Upadhyay wrote:
Hi,
I have one query regarding pseudo NMI support on GIC v3; from what I
could understand, GIC v3 supports pseudo NMI setup for SPIs and PPIs.
However the request_nmi() in irq
Hi Marc,
On 5/8/2020 5:57 PM, Marc Zyngier wrote:
On Fri, 8 May 2020 16:36:42 +0530
Neeraj Upadhyay wrote:
Hi Marc,
On 5/8/2020 4:15 PM, Marc Zyngier wrote:
On Thu, 07 May 2020 17:06:19 +0100,
Neeraj Upadhyay wrote:
Hi,
I have one query regarding pseudo NMI support on GIC v3; from what
Hi Marc,
On 5/8/2020 6:23 PM, Marc Zyngier wrote:
On Fri, 8 May 2020 18:09:00 +0530
Neeraj Upadhyay wrote:
Hi Marc,
On 5/8/2020 5:57 PM, Marc Zyngier wrote:
On Fri, 8 May 2020 16:36:42 +0530
Neeraj Upadhyay wrote:
Hi Marc,
On 5/8/2020 4:15 PM, Marc Zyngier wrote:
On Thu, 07 May
Hi Marc,
Thanks a lot for your comments. I will work on exploring how SDEI can be
used for it.
Thanks
Neeraj
On 5/8/2020 9:41 PM, Marc Zyngier wrote:
On Fri, 08 May 2020 14:34:10 +0100,
Neeraj Upadhyay wrote:
Hi Marc,
On 5/8/2020 6:23 PM, Marc Zyngier wrote:
On Fri, 8 May 2020 18:09
On callback overload, we want to force a quiescent state immediately
for the first and second FQS. Enforce this by including the
RCU_GP_FLAG_OVLD flag in the fqs start check.
Signed-off-by: Neeraj Upadhyay
---
kernel/rcu/tree.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git
Hi Paul,
On 6/22/2020 1:20 AM, Paul E. McKenney wrote:
On Mon, Jun 22, 2020 at 12:07:27AM +0530, Neeraj Upadhyay wrote:
On callback overload, we want to force quiescent state immediately,
for the first and second fqs. Enforce the same, by including
RCU_GP_FLAG_OVLD flag, in fqsstart check
Hi Paul,
On 6/22/2020 8:43 AM, Paul E. McKenney wrote:
On Mon, Jun 22, 2020 at 01:30:31AM +0530, Neeraj Upadhyay wrote:
Hi Paul,
On 6/22/2020 1:20 AM, Paul E. McKenney wrote:
On Mon, Jun 22, 2020 at 12:07:27AM +0530, Neeraj Upadhyay wrote:
On callback overload, we want to force quiescent
o, clean up the
code to avoid any confusion around the need for boosting,
for !CONFIG_PREEMPT_RCU.
Signed-off-by: Neeraj Upadhyay
---
kernel/rcu/tree.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 6226bfb..57c904b 100644
--- a/kernel/
Hi Paul,
On 6/23/2020 4:23 AM, Paul E. McKenney wrote:
On Mon, Jun 22, 2020 at 09:16:24AM +0530, Neeraj Upadhyay wrote:
Hi Paul,
On 6/22/2020 8:43 AM, Paul E. McKenney wrote:
On Mon, Jun 22, 2020 at 01:30:31AM +0530, Neeraj Upadhyay wrote:
Hi Paul,
On 6/22/2020 1:20 AM, Paul E. McKenney
Hi Paul,
On 6/23/2020 4:48 AM, Paul E. McKenney wrote:
On Mon, Jun 22, 2020 at 11:37:03PM +0530, Neeraj Upadhyay wrote:
Remove the CONFIG_PREEMPT_RCU check in force_qs_rnp(). Originally,
this check was required to skip executing the fqs failsafe
for rcu-sched, which was added in commit a77da14ce9af
dump_blkd_tasks() uses 10 as the max number of blocked
tasks which are printed. However, it has an argument
which provides that number, so use the argument value
instead. As all callers currently pass 10,
there isn't any impact.
Signed-off-by: Neeraj Upadhyay
---
kernel/rcu
Only unlock the root node if the current node (rnp) is not
the root node.
Signed-off-by: Neeraj Upadhyay
---
kernel/rcu/tree_stall.h | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/kernel/rcu/tree_stall.h b/kernel/rcu/tree_stall.h
index f65a73a..0651833 100644
--- a/kernel/rcu