On Mon, Nov 16, 2020 at 03:37:29PM +, Matthew Wilcox wrote:
> > > Something I believe lockdep is missing is a way to annotate "This lock
> > > will be released by a softirq". If we had lockdep for lock_page(), this
> > > would be a great case to show off. The filesystem locks the page, then
On Mon, Nov 23, 2020 at 08:05:27PM +0900, Byungchul Park wrote:
> Hi,
>
> This patchset is too nasty to get reviewed in detail for now.
I worked on Dept against mainline v5.9.
Thanks,
Byungchul
> This has:
>
>1. applying Dept to spinlock/mutex/rwlock/completion
>2.
On Mon, Nov 23, 2020 at 08:13:32PM +0900, Byungchul Park wrote:
> [0.995081] ===
> [0.995619] Dept: Circular dependency has been detected.
> [0.995816] 5.9.0+ #8 Tainted: GW
> [
multiple reporting thanks to a simple and quite
generalized design. Of course, false positive reports should be fixed,
but that is no longer a critical problem.
Signed-off-by: Byungchul Park
---
include/linux/dept.h| 495 +
include/linux/hardirq.h |3 +
include/linux/irqfla
Signed-off-by: Byungchul Park
---
arch/arm/mach-omap2/omap_hwmod.c | 6 +
arch/powerpc/platforms/powermac/low_i2c.c | 15 +++
block/blk-flush.c | 3 +++
block/blk.h| 1 +
drivers
Makes Dept able to track dependencies by rwlock.
Signed-off-by: Byungchul Park
---
include/linux/rwlock.h | 32 ++--
include/linux/rwlock_api_smp.h | 18 ++
include/linux/rwlock_types.h| 19 ---
kernel/locking
Makes Dept able to track dependencies by mutex families.
Signed-off-by: Byungchul Park
---
drivers/base/bus.c| 5 +-
drivers/base/class.c | 5 +-
drivers/gpu/drm/i915/gem/i915_gem_object.c| 5 +-
drivers/gpu/drm/i915/i915_active.c
Makes Dept able to track dependencies by
wait_for_completion()/complete().
Signed-off-by: Byungchul Park
---
include/linux/completion.h | 44
kernel/sched/completion.c | 16 ++--
2 files changed, 54 insertions(+), 6 deletions(-)
diff
Makes Dept able to track dependencies by spinlock.
Signed-off-by: Byungchul Park
---
include/linux/dept.h | 4 +++-
include/linux/llist.h| 9 +
include/linux/spinlock.h | 42
include/linux/spinlock_api_smp.h | 15
info: 3573 kB
[0.145875] per task-struct memory footprint: 1920 bytes
[0.146403] DEPendency Tracker: Copyright (c) 2020 LG Electronics, Inc.,
Byungchul Park
[0.147163] ... DEPT_MAX_STACK_ENTRY: 16
[0.147546] ... DEPT_MAX_WAIT_HIST : 16
[0.147929] ... DEPT_MAX_ECXT_HELD : 48
Hi,
This patchset is too nasty to get reviewed in detail for now.
This has:
1. applying Dept to spinlock/mutex/rwlock/completion
2. assigning custom keys or disable maps to avoid false positives
This doesn't have yet (but will be done):
1. proc interfaces e.g. to see dependencies the
On Mon, Nov 16, 2020 at 06:05:47PM +0900, Byungchul Park wrote:
> On Thu, Nov 12, 2020 at 11:58:44PM +0900, Byungchul Park wrote:
> > > > FYI, roughly Lockdep is doing:
> > > >
> > > >1. Dependency check
> > > >2. Lock usage correctness ch
On Thu, Nov 12, 2020 at 11:58:44PM +0900, Byungchul Park wrote:
> > > FYI, roughly Lockdep is doing:
> > >
> > >1. Dependency check
> > >2. Lock usage correctness check (including RCU)
> > >3. IRQ related usage correctness check with IRQFLAG
On Thu, Nov 12, 2020 at 02:52:51PM +, Matthew Wilcox wrote:
> On Thu, Nov 12, 2020 at 09:26:12AM -0500, Steven Rostedt wrote:
> > > FYI, roughly Lockdep is doing:
> > >
> > >1. Dependency check
> > >2. Lock usage correctness check (including RCU)
> > >3. IRQ related usage
On Thu, Nov 12, 2020 at 02:56:49PM +0100, Daniel Vetter wrote:
> > > I think I understand it. For things like completions and other "wait for
> > > events" we have lockdep annotation, but it is rather awkward to implement.
> > > Having something that says "lockdep_wait_event()" and
> > >
On Thu, Nov 12, 2020 at 11:28 PM Steven Rostedt wrote:
>
> On Thu, 12 Nov 2020 17:10:30 +0900
> Byungchul Park wrote:
>
> > 2. Does Lockdep do what a deadlock detection tool should do? From
> >internal engine to APIs, all the internal data structure and
> >
On Wed, Nov 11, 2020 at 09:36:09AM -0500, Steven Rostedt wrote:
> And this is especially true with lockdep, because lockdep only detects the
> deadlock, it doesn't tell you which lock was the incorrect locking.
>
> For example. If we have a locking chain of:
>
> A -> B -> D
>
> A -> C -> D
>
On Thu, Nov 12, 2020 at 05:51:14PM +0900, Byungchul Park wrote:
> On Thu, Nov 12, 2020 at 03:15:32PM +0900, Byungchul Park wrote:
> > > If on the other hand there's some bug in lockdep itself that causes
> > > excessive false positives, it's better to limit the number of repo
On Thu, Nov 12, 2020 at 03:15:32PM +0900, Byungchul Park wrote:
> > If on the other hand there's some bug in lockdep itself that causes
> > excessive false positives, it's better to limit the number of reports
> > to one per bootup, so that it's not seen as a nuisance debug
On Thu, Nov 12, 2020 at 12:16:50AM +0100, Thomas Gleixner wrote:
> Wrappers which make things simpler are always useful, but the lack of
> wrappers does not justify a wholesale replacement.
Totally right. Lack of wrappers doesn't matter at all. That could be
achieved easily by modifying the
On Wed, Nov 11, 2020 at 11:54:41AM +0100, Ingo Molnar wrote:
> > We cannot get reported other than the first one.
>
> Correct. Experience has shown that the overwhelming majority of
> lockdep reports are single-cause and single-report.
>
> This is an optimal approach, because after a decade of
Hello folks,
We have no choice but to use Lockdep to track dependencies for deadlock
detection with the current kernel. I'm wondering whether people are
satisfied with that tool. Lockdep has problems too big to keep using it.
---
PROBLEM 1) First of all, Lockdep gets disabled on the first detection.
on Sep 17 00:00:00 2001
From: Byungchul Park
Date: Thu, 23 Jul 2020 18:42:05 +0900
Subject: [RFC] sched: Consider higher or lower sched_class tasks on
migration
The scheduler should avoid migrating rt (or deadline) tasks to CPUs
running higher sched_class tasks, but it doesn't.
In addition
On Tue, Aug 13, 2019 at 07:53:49PM -0700, Paul E. McKenney wrote:
> On Wed, Aug 14, 2019 at 09:11:03AM +0900, Byungchul Park wrote:
> > On Tue, Aug 13, 2019 at 08:41:45AM -0700, Paul E. McKenney wrote:
> > > On Tue, Aug 13, 2019 at 02:29:54PM +0900, Byungchul Park wrote:
>
On Tue, Aug 13, 2019 at 08:41:45AM -0700, Paul E. McKenney wrote:
> On Tue, Aug 13, 2019 at 02:29:54PM +0900, Byungchul Park wrote:
> > On Mon, Aug 12, 2019 at 09:12:34AM -0400, Joel Fernandes wrote:
> > > On Mon, Aug 12, 2019 at 07:10:52PM +0900, Byungchul Park wrote:
> >
On Mon, Aug 12, 2019 at 09:12:34AM -0400, Joel Fernandes wrote:
> On Mon, Aug 12, 2019 at 07:10:52PM +0900, Byungchul Park wrote:
> > On Sun, Aug 11, 2019 at 04:49:39PM -0700, Paul E. McKenney wrote:
> > > Maybe. Note well that I said "potential issue". When I checked
On Sun, Aug 11, 2019 at 04:49:39PM -0700, Paul E. McKenney wrote:
> Maybe. Note well that I said "potential issue". When I checked a few
> years ago, none of the uses of rcu_barrier() cared about kfree_rcu().
> They cared instead about call_rcu() callbacks that accessed code or data
> that was
On Sun, Aug 11, 2019 at 05:36:26PM +0900, Byungchul Park wrote:
> On Thu, Aug 08, 2019 at 11:09:16AM -0700, Paul E. McKenney wrote:
> > On Thu, Aug 08, 2019 at 11:23:17PM +0900, Byungchul Park wrote:
> > > On Thu, Aug 8, 2019 at 9:56 PM Joel Fernandes
> > > wrote:
>
On Thu, Aug 08, 2019 at 11:09:16AM -0700, Paul E. McKenney wrote:
> On Thu, Aug 08, 2019 at 11:23:17PM +0900, Byungchul Park wrote:
> > On Thu, Aug 8, 2019 at 9:56 PM Joel Fernandes
> > wrote:
> > >
> > > On Thu, Aug 08, 2019 at 06:52:32PM +0900, Byungchul P
On Thu, Aug 8, 2019 at 9:56 PM Joel Fernandes wrote:
>
> On Thu, Aug 08, 2019 at 06:52:32PM +0900, Byungchul Park wrote:
> > On Wed, Aug 07, 2019 at 10:52:15AM -0700, Paul E. McKenney wrote:
> > > > > On Tue, Aug 06, 2019 at 05:20:40PM -0400, Joel Fernande
On Wed, Aug 07, 2019 at 05:45:04AM -0400, Joel Fernandes wrote:
> On Tue, Aug 06, 2019 at 04:56:31PM -0700, Paul E. McKenney wrote:
[snip]
> > On Tue, Aug 06, 2019 at 05:20:40PM -0400, Joel Fernandes (Google) wrote:
> > Of course, I am hoping that a later patch uses an array of pointers built
>
On Wed, Aug 07, 2019 at 10:52:15AM -0700, Paul E. McKenney wrote:
> On Wed, Aug 07, 2019 at 05:45:04AM -0400, Joel Fernandes wrote:
> > On Tue, Aug 06, 2019 at 04:56:31PM -0700, Paul E. McKenney wrote:
> > > On Tue, Aug 06, 2019 at 05:20:40PM -0400, Joel Fernandes (Google) wrote:
>
> [ . . . ]
>
On Tue, Jul 23, 2019 at 06:47:17AM -0700, Paul E. McKenney wrote:
> On Tue, Jul 23, 2019 at 08:05:21PM +0900, Byungchul Park wrote:
> > On Fri, Jul 19, 2019 at 04:33:56PM -0400, Joel Fernandes wrote:
> > > On Fri, Jul 19, 2019 at 3:57 PM Paul E. McKenney
> > > wrote:
On Tue, Jul 23, 2019 at 09:54:03AM -0700, Paul E. McKenney wrote:
> On Tue, Jul 23, 2019 at 06:47:17AM -0700, Paul E. McKenney wrote:
> > On Tue, Jul 23, 2019 at 08:05:21PM +0900, Byungchul Park wrote:
> > > On Fri, Jul 19, 2019 at 04:33:56PM -0400, Joel Fernandes wrote:
>
On Fri, Jul 19, 2019 at 04:33:56PM -0400, Joel Fernandes wrote:
> On Fri, Jul 19, 2019 at 3:57 PM Paul E. McKenney
> wrote:
> >
> > On Fri, Jul 19, 2019 at 06:57:58PM +0900, Byungchul Park wrote:
> > > On Fri, Jul 19, 2019 at 4:43 PM Paul E. McKenney
> > > w
On Fri, Jul 19, 2019 at 4:43 PM Paul E. McKenney wrote:
>
> On Thu, Jul 18, 2019 at 08:52:52PM -0400, Joel Fernandes wrote:
> > On Thu, Jul 18, 2019 at 8:40 PM Byungchul Park
> > wrote:
> > [snip]
> > > > - There is a bug in the CPU stopper machinery itself p
On Thu, Jul 18, 2019 at 08:52:52PM -0400, Joel Fernandes wrote:
> On Thu, Jul 18, 2019 at 8:40 PM Byungchul Park wrote:
> [snip]
> > > - There is a bug in the CPU stopper machinery itself preventing it
> > > from scheduling the stopper on Y. Even though Y is not holding u
On Thu, Jul 18, 2019 at 02:34:19PM -0700, Paul E. McKenney wrote:
> On Thu, Jul 18, 2019 at 12:14:22PM -0400, Joel Fernandes wrote:
> > Trimming the list a bit to keep my noise level low,
> >
> > On Sat, Jul 13, 2019 at 1:41 PM Paul E. McKenney
> > wrote:
> > [snip]
> > > > It still feels like
On Thu, Jul 18, 2019 at 12:14:22PM -0400, Joel Fernandes wrote:
> Trimming the list a bit to keep my noise level low,
>
> On Sat, Jul 13, 2019 at 1:41 PM Paul E. McKenney
> wrote:
> [snip]
> > > It still feels like you guys are hyperfocusing on this one particular
> > > > knob. I instead need
> > > > On Sat, Jul 13, 2019 at 4:47 AM Byungchul Park
> > > > wrote:
> > > > >
> > > > > On Fri, Jul 12, 2019 at 9:51 PM Joel Fernandes
> > > > > wrote:
> > > > > >
> > > > > > On Fri, Jul 12
On Fri, Jul 12, 2019 at 2:50 PM Byungchul Park wrote:
>
> On Thu, Jul 11, 2019 at 05:30:52AM -0700, Paul E. McKenney wrote:
> > > > If there is a real need, something needs to be provided to meet that
> > > > need. But in the absence of a real n
On Fri, Jul 12, 2019 at 9:51 PM Joel Fernandes wrote:
>
> On Fri, Jul 12, 2019 at 03:32:40PM +0900, Byungchul Park wrote:
> > On Thu, Jul 11, 2019 at 03:58:39PM -0400, Joel Fernandes wrote:
> > > Hmm, speaking of grace period durations, it seems to me the maximum gr
On Thu, Jul 11, 2019 at 03:58:39PM -0400, Joel Fernandes wrote:
> Hmm, speaking of grace period durations, it seems to me the maximum grace
> period ever is recorded in rcu_state.gp_max. However it is not read from
> anywhere.
>
> Any idea why it was added but not used?
>
> I am interested in
On Thu, Jul 11, 2019 at 08:02:15AM -0700, Paul E. McKenney wrote:
> These would be the tunables controlling how quickly RCU takes its
> various actions to encourage the current grace period to end quickly.
Seriously, one of the most interesting things across all kernel work.
> I would be happy to
On Thu, Jul 11, 2019 at 09:08:49AM -0400, Joel Fernandes wrote:
> > Finally, I urge you to join with Joel Fernandes and go through these
> > grace-period-duration tuning parameters. Once you guys get your heads
> > completely around all of them and how they interact across the different
> >
On Thu, Jul 11, 2019 at 05:30:52AM -0700, Paul E. McKenney wrote:
> > > If there is a real need, something needs to be provided to meet that
> > > need. But in the absence of a real need, past experience has shown
> > > that speculative tuning knobs usually do more harm than good. ;-)
> >
> >
On Tue, Jul 09, 2019 at 05:41:02AM -0700, Paul E. McKenney wrote:
> > Hi Paul,
> >
> > IMHO, as much as we want to tune the time for fqs to be initiated, we
> > may also want to tune the time for the help from the scheduler to start.
> > I thought the only difference between them is a level of urgency. I
On Tue, Jul 09, 2019 at 02:58:16PM +0900, Byungchul Park wrote:
> On Mon, Jul 08, 2019 at 09:03:59AM -0400, Joel Fernandes wrote:
> > > Actually, the intent was to only allow this to be changed at boot time.
> > > Of course, if there is now a good reason t
On Mon, Jul 08, 2019 at 06:19:42AM -0700, Paul E. McKenney wrote:
> On Mon, Jul 08, 2019 at 09:03:59AM -0400, Joel Fernandes wrote:
> > Good morning!
> >
> > On Mon, Jul 08, 2019 at 05:50:13AM -0700, Paul E. McKenney wrote:
> > > On Mon, Jul 08, 2019 at 03:00:09
> adjust_jiffies_till_sched_qs() will be called only if
> > > the value from sysfs != ULONG_MAX. And the value won't be adjusted
> > > unlike first/next fqs jiffies.
> > >
> > > While at it, changed the positions of two module_param()s downward.
> > >
only if
the value from sysfs != ULONG_MAX. And the value won't be adjusted
unlike first/next fqs jiffies.
While at it, changed the positions of two module_param()s downward.
Signed-off-by: Byungchul Park
---
kernel/rcu/tree.c | 22 --
1 file changed, 20 insertions(+), 2
On Thu, Jul 04, 2019 at 10:40:44AM -0700, Paul E. McKenney wrote:
> On Thu, Jul 04, 2019 at 12:34:30AM -0400, Joel Fernandes (Google) wrote:
> > It is possible that the rcuperf kernel test runs concurrently with init
> > starting up. During this time, the system is running all grace periods
> >
On Tue, Jul 02, 2019 at 01:45:02AM -0700, Paul E. McKenney wrote:
> On Tue, Jul 02, 2019 at 02:11:03PM +0900, Byungchul Park wrote:
> > On Mon, Jul 01, 2019 at 01:12:16PM -0700, Paul E. McKenney wrote:
> > > On Mon, Jul 01, 2019 at 09:40:39AM +0900, Byungchul Park wr
On Mon, Jul 01, 2019 at 01:12:16PM -0700, Paul E. McKenney wrote:
> On Mon, Jul 01, 2019 at 09:40:39AM +0900, Byungchul Park wrote:
> > Hello,
> >
> > I tested again if the WARN_ON_ONCE() is fired with my box.
> >
> > And it was OK.
> >
> > Thank
title and commit message a bit.
---8<---
From 20c934c5657a7a0f13ebb050ffd350d4174965d0 Mon Sep 17 00:00:00 2001
From: Byungchul Park
Date: Mon, 1 Jul 2019 09:27:15 +0900
Subject: [PATCH v3] rcu: Change return type of rcu_spawn_one_boost_kthread()
The return value of rcu_spawn_one_boost_k
On Sun, Jun 30, 2019 at 12:38:34PM -0700, Paul E. McKenney wrote:
> On Fri, Jun 28, 2019 at 11:43:39AM +0900, Byungchul Park wrote:
> > On Thu, Jun 27, 2019 at 01:57:03PM -0700, Paul E. McKenney wrote:
> > > On Thu, Jun 27, 2019 at 09:42:40AM -0400, Joel Fernandes wrote:
>
On Fri, Jun 28, 2019 at 11:44:11AM -0400, Steven Rostedt wrote:
> On Fri, 28 Jun 2019 19:40:45 +0900
> Byungchul Park wrote:
>
> > Wait.. I got a little bit confused on reordering.
> >
> > This 'STORE rcu_read_lock_nesting = 0' can happen before
> > 'STORE r
On Fri, Jun 28, 2019 at 04:31:38PM +0900, Byungchul Park wrote:
> On Thu, Jun 27, 2019 at 01:36:12PM -0700, Paul E. McKenney wrote:
> > On Thu, Jun 27, 2019 at 03:17:27PM -0500, Scott Wood wrote:
> > > On Thu, 2019-06-27 at 11:41 -0700, Paul E. McKenney wrote:
> > > >
On Fri, Jun 28, 2019 at 06:10:42PM +0900, Byungchul Park wrote:
> On Fri, Jun 28, 2019 at 04:43:50PM +0900, Byungchul Park wrote:
> > On Fri, Jun 28, 2019 at 04:31:38PM +0900, Byungchul Park wrote:
> > > On Thu, Jun 27, 2019 at 01:36:12PM -0700, Paul E. McKenney wrote:
> >
On Fri, Jun 28, 2019 at 04:43:50PM +0900, Byungchul Park wrote:
> On Fri, Jun 28, 2019 at 04:31:38PM +0900, Byungchul Park wrote:
> > On Thu, Jun 27, 2019 at 01:36:12PM -0700, Paul E. McKenney wrote:
> > > On Thu, Jun 27, 2019 at 03:17:27PM -0500, Scott Wood wrote:
> > >
On Fri, Jun 28, 2019 at 05:14:32PM +0900, Byungchul Park wrote:
> On Fri, Jun 28, 2019 at 04:43:50PM +0900, Byungchul Park wrote:
> > On Fri, Jun 28, 2019 at 04:31:38PM +0900, Byungchul Park wrote:
> > > On Thu, Jun 27, 2019 at 01:36:12PM -0700, Paul E. McKenney wrote:
> >
On Fri, Jun 28, 2019 at 04:43:50PM +0900, Byungchul Park wrote:
> On Fri, Jun 28, 2019 at 04:31:38PM +0900, Byungchul Park wrote:
> > On Thu, Jun 27, 2019 at 01:36:12PM -0700, Paul E. McKenney wrote:
> > > On Thu, Jun 27, 2019 at 03:17:27PM -0500, Scott Wood wrote:
> > >
On Fri, Jun 28, 2019 at 04:31:38PM +0900, Byungchul Park wrote:
> On Thu, Jun 27, 2019 at 01:36:12PM -0700, Paul E. McKenney wrote:
> > On Thu, Jun 27, 2019 at 03:17:27PM -0500, Scott Wood wrote:
> > > On Thu, 2019-06-27 at 11:41 -0700, Paul E. McKenney wrote:
> > > >
On Thu, Jun 27, 2019 at 01:36:12PM -0700, Paul E. McKenney wrote:
> On Thu, Jun 27, 2019 at 03:17:27PM -0500, Scott Wood wrote:
> > On Thu, 2019-06-27 at 11:41 -0700, Paul E. McKenney wrote:
> > > On Thu, Jun 27, 2019 at 02:16:38PM -0400, Joel Fernandes wrote:
> > > >
> > > > I think the fix
On Thu, Jun 27, 2019 at 01:57:03PM -0700, Paul E. McKenney wrote:
> On Thu, Jun 27, 2019 at 09:42:40AM -0400, Joel Fernandes wrote:
> > On Thu, Jun 27, 2019 at 04:07:46PM +0900, Byungchul Park wrote:
> > > Hello,
> > >
> > > I tested if the WARN_ON_ONCE(
From: Byungchul Park
Date: Thu, 27 Jun 2019 15:58:10 +0900
Subject: [PATCH v2] rcu: Change return type of rcu_spawn_one_boost_kthread()
The return value of rcu_spawn_one_boost_kthread() is no longer used.
Change the return type of that function from int to void.
Signed-off-by: Byungchu
On Wed, Jun 26, 2019 at 09:30:45AM -0700, Paul E. McKenney wrote:
> On Wed, Jun 26, 2019 at 11:51:20AM +0900, Byungchul Park wrote:
> > On Tue, Jun 25, 2019 at 06:31:15AM -0700, Paul E. McKenney wrote:
> > > On Tue, Jun 25, 2019 at 11:41:00AM +0900, Byungchul Park wrote:
>
On Tue, Jun 25, 2019 at 06:31:15AM -0700, Paul E. McKenney wrote:
> On Tue, Jun 25, 2019 at 11:41:00AM +0900, Byungchul Park wrote:
> > On Mon, Jun 24, 2019 at 10:25:51AM -0700, Paul E. McKenney wrote:
> > > On Mon, Jun 24, 2019 at 12:46:24PM -0400, Joel Fernandes wrote:
>
On Mon, Jun 24, 2019 at 12:46:24PM -0400, Joel Fernandes wrote:
> On Mon, Jun 24, 2019 at 05:27:32PM +0900, Byungchul Park wrote:
> > Hello rcu folks,
> >
> > I thought it'd be better to announce it if those spawnings fail because of
> > !rcu_scheduler_fully
On Mon, Jun 24, 2019 at 10:25:51AM -0700, Paul E. McKenney wrote:
> On Mon, Jun 24, 2019 at 12:46:24PM -0400, Joel Fernandes wrote:
> > On Mon, Jun 24, 2019 at 05:27:32PM +0900, Byungchul Park wrote:
> > > Hello rcu folks,
> > >
> > > I thought it'd be better to
From: Byungchul Park
Date: Mon, 24 Jun 2019 17:08:26 +0900
Subject: [RFC] rcu: Warn that rcu kthreads cannot be spawned
In case rcu kthreads cannot be spawned due to
!rcu_scheduler_fully_active, it'd be better to announce it.
While at it, because the return value of rcu_spawn_one_boost_k
On Mon, Mar 11, 2019 at 09:39:39AM -0400, Joel Fernandes wrote:
> On Wed, Aug 29, 2018 at 03:20:34PM -0700, Paul E. McKenney wrote:
> > RCU's dyntick-idle code is written to tolerate half-interrupts, that is,
> > either an interrupt that invokes rcu_irq_enter() but never invokes the
> >
On 03/15/2019 04:31 PM, Byungchul Park wrote:
On Mon, Mar 11, 2019 at 09:39:39AM -0400, Joel Fernandes wrote:
On Wed, Aug 29, 2018 at 03:20:34PM -0700, Paul E. McKenney wrote:
RCU's dyntick-idle code is written to tolerate half-interrupts, that is,
either an interrupt that invokes
> > rcu_dynticks_eqs_exit(), and this patch does not change that. Before the
> > call to rcu_dynticks_eqs_exit(), RCU is not yet watching the current
> > CPU and after that call RCU is watching.
> >
> > A similar switch in calling order happens on the idle-entry p
On Wed, Aug 22, 2018 at 09:07:23AM +0200, Johannes Berg wrote:
> On Wed, 2018-08-22 at 14:47 +0900, Byungchul Park wrote:
> > On Wed, Aug 22, 2018 at 06:02:23AM +0200, Johannes Berg wrote:
> > > On Wed, 2018-08-22 at 11:45 +0900, Byungchul Park wrote:
> > >
> >
On Wed, Aug 01, 2018 at 01:43:10PM +0800, Huang, Ying wrote:
> Byungchul Park writes:
>
> > I think rcu list also works well. But I decided to use llist because
> > llist is simpler and has one less pointer.
> >
> > Just to be sure, let me explain my use case
On Tue, Jul 31, 2018 at 09:46:16AM -0400, Steven Rostedt wrote:
> On Tue, 31 Jul 2018 18:38:09 +0900
> Byungchul Park wrote:
>
> > On Tue, Jul 31, 2018 at 10:52:57AM +0200, Peter Zijlstra wrote:
> > > On Tue, Jul 31, 2018 at 09:58:36AM +0900, Byungchul Park wrote:
>
On Tue, Jul 31, 2018 at 07:30:52AM -0700, Paul E. McKenney wrote:
> On Tue, Jul 31, 2018 at 06:29:50PM +0900, Byungchul Park wrote:
> > On Mon, Jul 30, 2018 at 09:30:42PM -0700, Paul E. McKenney wrote:
> > > On Tue, Jul 31, 2018 at 09:58:36AM +0900, Byungchul Park wrote:
On Tue, Jul 31, 2018 at 10:52:57AM +0200, Peter Zijlstra wrote:
> On Tue, Jul 31, 2018 at 09:58:36AM +0900, Byungchul Park wrote:
> > In restrictive cases like only additions happen but never deletion, can't
> > we safely traverse a llist? I believe llist can be more useful if we
On Mon, Jul 30, 2018 at 09:30:42PM -0700, Paul E. McKenney wrote:
> On Tue, Jul 31, 2018 at 09:58:36AM +0900, Byungchul Park wrote:
> > Hello folks,
> >
> > I'm careful in saying.. and curious about..
> >
> > In restrictive cases like only additions happen but ne
On Tue, Jul 31, 2018 at 09:37:50AM +0800, Huang, Ying wrote:
> Byungchul Park writes:
>
> > Hello folks,
> >
> > I'm careful in saying.. and curious about..
> >
> > In restrictive cases like only additions happen but never deletion, can't
> > we sa
from a head. Or
just use the existing function with head->first.
Thanks a lot for your answers in advance :)
->8-
From 1e73882799b269cd86e7a7c955021e3a18d1e6cf Mon Sep 17 00:00:00 2001
From: Byungchul Park
Date: Tue, 31 Jul 2018 09:31:57 +0900
Subject: [QUESTION] llist: Comment releas
On Tue, Jun 19, 2018 at 02:16:36PM +0900, Byungchul Park wrote:
> On Tue, Jun 19, 2018 at 6:42 AM, Steven Rostedt wrote:
> > On Mon, 18 Jun 2018 13:58:09 +0900
> > Byungchul Park wrote:
> >
> >> Hello Steven,
> >>
> >> I've changed the cod
On Fri, Jun 22, 2018 at 01:05:48PM -0700, Joel Fernandes wrote:
> On Fri, Jun 22, 2018 at 02:32:47PM -0400, Steven Rostedt wrote:
> > On Fri, 22 Jun 2018 11:19:16 -0700
> > Joel Fernandes wrote:
> >
> > > Sure. So in a later thread you mentioned "usermode helpers". I took a
> > > closer
> > >
On Sat, Jun 23, 2018 at 10:49:54AM -0700, Paul E. McKenney wrote:
> On Fri, Jun 22, 2018 at 03:23:51PM +0900, Byungchul Park wrote:
> > On Fri, Jun 22, 2018 at 03:12:06PM +0900, Byungchul Park wrote:
> > > When passing through irq or NMI contexts, the current code uses
> >
by: Paul E. McKenney
Signed-off-by: Byungchul Park
---
kernel/rcu/tree.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 4ed74f1..6c5a7f0 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -774,6 +774,7 @@ void rcu_user_enter(void)
/**
*