ping
On 05/12/2015 08:32 PM, Lai Jiangshan wrote:
Hi,
This is the V2 of the patchset, but it only updates patches 1-2 of the V1 patchset.
[1/5 V1] is split into [1/7 V2] and [2/7 V2].
[2/5 V1] is split into [3,4,5,6,7/7 V2].
[1/7] extends the wq_pool_mutex as TJ suggested.
Thanks,
Lai
Cc: Tejun Heo t...@kernel.org
Lai Jiangshan (7):
workqueue: wq_pool_mutex protects the attrs-installation
workqueue: simplify wq_update_unbound_numa()
workqueue: introduce get_pwq_unlocked()
workqueue: reuse the current per-node pwq when its attrs are unchanged
.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 20 +---
1 file changed, 5 insertions(+), 15 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index f02b8ad..c8b9de0 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -3708,7
A preparation patch for the next several patches, which read
wq->unbound_attrs, wq->numa_pwq_tbl[] and wq->dfl_pwq with
only wq_pool_mutex held.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 27 ---
1 file changed, 20 insertions(+), 7 deletions(-)
diff --git
-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 33 ++---
1 file changed, 22 insertions(+), 11 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index c8b9de0..0fa352d 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1067,6
tmp_attrs is just a temporary attrs; we can use wq_update_unbound_numa_attrs_buf
for it, as wq_update_unbound_numa() does.
This change also avoids frequent alloc/free of tmp_attrs when
the low-level cpumask is being updated.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c
this change, ctx->dfl_pwq->refcnt++ could be dangerous
when ctx->dfl_pwq is being reused, so we use get_pwq_unlocked() instead.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 8 +---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/kernel/workqueue.c b
If the cpumask is changed, it is possible that only part of the per-node
pwqs are affected. This can happen when the user changes the cpumask of
a workqueue or the low-level cpumask.
So we try to reuse the current per-node pwq when its attrs are unchanged.
Signed-off-by: Lai Jiangshan la
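The reuse logic described above can be sketched as a small userspace model. The attrs/pwq layouts and the function name below are illustrative stand-ins, not the kernel's (the real code compares full workqueue_attrs and takes the pool lock):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical, simplified attrs and pwq. */
struct wq_attrs { unsigned long cpumask; int nice; };
struct pwq { struct wq_attrs attrs; int refcnt; };

static int attrs_equal(const struct wq_attrs *a, const struct wq_attrs *b)
{
	return a->cpumask == b->cpumask && a->nice == b->nice;
}

/* Reuse the current per-node pwq when its attrs are unchanged;
 * otherwise allocate a fresh one. */
static struct pwq *get_node_pwq(struct pwq *cur, const struct wq_attrs *want)
{
	if (cur && attrs_equal(&cur->attrs, want)) {
		cur->refcnt++;		/* reuse: just take a reference */
		return cur;
	}
	struct pwq *pwq = calloc(1, sizeof(*pwq));
	pwq->attrs = *want;
	pwq->refcnt = 1;
	return pwq;
}
```

The point of the patch is the first branch: when only part of the cpumask change affects a node, the unaffected nodes keep their existing pwq with a bumped refcount instead of getting a new allocation.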
is reused. Compared to the old behavior,
wq_update_unbound_numa() introduces 3 pairs of lock()/unlock()
operations and some overhead when the pwq is unchanged. Although
cpu-hotplug is a cold path, this case is likely in
the cpu-hotplug path.
Signed-off-by: Lai
().
The apply_wqattrs_[un]lock() will also be used in a later patch for
ensuring attrs changes are properly synchronized.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 82 --
1 file changed, 49 insertions(+), 33 deletions
() |
The result is that Process B's operation is silently reverted,
which is buggy behavior. So this patch
moves wq_sysfs_prep_attrs() under the protection of wq_pool_mutex
to ensure attrs changes are properly synchronized.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
Currently, modification of attrs via sysfs is not fully synchronized.
So this patch separates out and refactors the locking and
ensures attrs changes are properly synchronized.
Changed from V1: just split the patch.
Cc: Tejun Heo t...@kernel.org
Lai Jiangshan (2):
workqueue: separate out
On 04/06/2015 11:53 PM, Tejun Heo wrote:
On Thu, Apr 02, 2015 at 07:14:42PM +0800, Lai Jiangshan wrote:
/* make a copy of @attrs and sanitize it */
copy_workqueue_attrs(new_attrs, attrs);
-cpumask_and(new_attrs->cpumask, new_attrs->cpumask,
wq_unbound_global_cpumask
On 04/07/2015 09:58 AM, Tejun Heo wrote:
Hello, Lai.
On Tue, Apr 07, 2015 at 09:25:59AM +0800, Lai Jiangshan wrote:
On 04/06/2015 11:53 PM, Tejun Heo wrote:
On Thu, Apr 02, 2015 at 07:14:42PM +0800, Lai Jiangshan wrote:
/* make a copy of @attrs and sanitize it */
copy_workqueue_attrs
to cpu_possible_mask.
Cc: Christoph Lameter c...@linux.com
Cc: Kevin Hilman khil...@linaro.org
Cc: Lai Jiangshan la...@cn.fujitsu.com
Cc: Mike Galbraith bitbuc...@online.de
Cc: Paul E. McKenney paul...@linux.vnet.ibm.com
Cc: Tejun Heo t...@kernel.org
Cc: Viresh Kumar viresh.ku...@linaro.org
Signed-off
needed.
Cc: Christoph Lameter c...@linux.com
Cc: Kevin Hilman khil...@linaro.org
Cc: Lai Jiangshan la...@cn.fujitsu.com
Cc: Mike Galbraith bitbuc...@online.de
Cc: Paul E. McKenney paul...@linux.vnet.ibm.com
Cc: Tejun Heo t...@kernel.org
Cc: Viresh Kumar viresh.ku...@linaro.org
Cc: Frederic Weisbecker
into wq_pool_mutex.
This is needed to avoid further splitting.
Suggested-by: Tejun Heo t...@kernel.org
Cc: Christoph Lameter c...@linux.com
Cc: Kevin Hilman khil...@linaro.org
Cc: Lai Jiangshan la...@cn.fujitsu.com
Cc: Mike Galbraith bitbuc...@online.de
Cc: Paul E. McKenney paul
On 04/09/2015 04:14 PM, Peter Zijlstra wrote:
On Thu, Apr 09, 2015 at 04:09:07PM +0800, Lai Jiangshan wrote:
On 04/09/2015 12:48 AM, Peter Zijlstra wrote:
+
+struct latch_tree_node {
+ /*
+* Because we have an array of two entries in struct latch_tree_nodes
+* it's not possible
On 04/09/2015 12:48 AM, Peter Zijlstra wrote:
+static void module_assert_mutex_or_preempt(void)
+{
+#ifdef CONFIG_LOCKDEP
+ int rcu_held = rcu_read_lock_sched_held();
+ int mutex_held = 1;
+
+ if (debug_locks)
+ mutex_held = lockdep_is_held(module_mutex);
+
+
On 04/09/2015 12:48 AM, Peter Zijlstra wrote:
+
+struct latch_tree_node {
+ /*
+ * Because we have an array of two entries in struct latch_tree_nodes
+ * it's not possible to use container_of() to get back to the
+ * encapsulating structure; therefore we have to put in a
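The comment in the quoted hunk is about a real C limitation: with an array of two embedded nodes, container_of() on a node pointer cannot tell which array slot it came from. One illustrative way out (a sketch, with made-up minimal types, not Peter's actual solution, which the quote truncates) is to record the slot index in each node and subtract it back out:

```c
#include <assert.h>
#include <stddef.h>

/* Minimal stand-ins for the latch tree types. */
struct latch_tree_node { int idx; };
struct latch_tree_nodes { struct latch_tree_node node[2]; };

/* container_of() alone can't recover the enclosing struct from
 * &nodes->node[1]; stepping back idx elements first makes it work. */
static struct latch_tree_nodes *nodes_of(struct latch_tree_node *n)
{
	return (struct latch_tree_nodes *)
		((char *)(n - n->idx) - offsetof(struct latch_tree_nodes, node));
}
```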
Patches 1-2 are simple cleanups reflecting recent changes.
Patch 3 just moves code.
Cc: Tejun Heo t...@kernel.org
Lai Jiangshan (3):
workqueue: remove the declaration of copy_workqueue_attrs()
workqueue: remove the lock from wq_sysfs_prep_attrs()
workqueue: move flush_scheduled_work
Reading wq->unbound_attrs requires protection of either wq_pool_mutex
or wq->mutex, and wq_sysfs_prep_attrs() is called with wq_pool_mutex held,
so we don't need to grab wq->mutex here.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 4 ++--
1 file changed, 2
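The "either lock protects reads" rule above follows the usual kernel idiom: writers hold both locks, so a reader holding either one sees a stable value. A userspace model with lockdep-style assertions (the field and flag names are illustrative):

```c
#include <assert.h>

/* Stand-ins for lockdep_is_held() state. */
struct wq_model {
	int holds_pool_mutex;
	int holds_wq_mutex;
	int unbound_attrs;
};

static int read_unbound_attrs(const struct wq_model *wq)
{
	/* either lock suffices for a read */
	assert(wq->holds_pool_mutex || wq->holds_wq_mutex);
	return wq->unbound_attrs;
}

static void write_unbound_attrs(struct wq_model *wq, int v)
{
	/* writers must hold both, which is what makes "either" safe */
	assert(wq->holds_pool_mutex && wq->holds_wq_mutex);
	wq->unbound_attrs = v;
}
```

Since wq_sysfs_prep_attrs() runs with wq_pool_mutex held, it is already on the read-safe side and the extra wq->mutex acquisition is redundant.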
flush_scheduled_work() is just a simple call to flush_work().
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
include/linux/workqueue.h | 30 +-
kernel/workqueue.c| 30 --
2 files changed, 29 insertions(+), 31 deletions
This pre-declaration has been unneeded since a previous refactoring patch,
6ba94429c8e7 ("workqueue: Reorder sysfs code").
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index ee5bf95
this change, ctx->dfl_pwq->refcnt++ could be dangerous
when ctx->dfl_pwq is a reused pwq which may be receiving or processing
work items; it would hurt the concurrent [get|put]_pwq(),
so we use get_pwq_unlocked() instead.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 16
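The danger described above is that get_pwq() assumes the pool lock is held, so a bare refcnt++ on a pwq that is live (receiving/processing work) races with the pool. A userspace sketch of the wrapper, with a trivial flag standing in for the pool's spinlock (types simplified, not the kernel's):

```c
#include <assert.h>

struct pool { int locked; };
struct pwq { struct pool *pool; int refcnt; };

static void pool_lock(struct pool *p)   { assert(!p->locked); p->locked = 1; }
static void pool_unlock(struct pool *p) { p->locked = 0; }

/* get_pwq() requires the pool lock to already be held. */
static void get_pwq(struct pwq *pwq)
{
	assert(pwq->pool->locked);
	pwq->refcnt++;
}

/* For callers that hold no locks: take the pool lock around the bump,
 * instead of doing a racy bare refcnt++. */
static void get_pwq_unlocked(struct pwq *pwq)
{
	pool_lock(pwq->pool);
	get_pwq(pwq);
	pool_unlock(pwq->pool);
}
```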
not reimplement it in the future
Cc: Tejun Heo t...@kernel.org
Lai Jiangshan (4):
workqueue: introduce get_pwq_unlocked()
workqueue: reuse the current per-node pwq when its attrs are unchanged
workqueue: reuse the current default pwq when its attrs are unchanged
workqueue: reuse
already made the current pwq be
reused when its attrs are unaffected; here we move the code that fetches
the current pwq closer to the code that tests it.
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 16
1 file changed, 12 insertions(+), 4 deletions(-)
diff
-by: Lai Jiangshan la...@cn.fujitsu.com
---
kernel/workqueue.c | 31 ---
1 file changed, 20 insertions(+), 11 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index afe7c53..6aa9bd5 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1065,6 +1065,20
().
The comment about using wq_calc_node_attrs_buf in wq_update_unbound_numa()
is also moved to the definition of wq_calc_node_attrs_buf.
This change also avoids frequent alloc/free of tmp_attrs for every
workqueue when the low-level cpumask is being updated.
Signed-off-by: Lai Jiangshan la
...@linux.vnet.ibm.com
Cc: Tejun Heo t...@kernel.org
Cc: Lai Jiangshan la...@cn.fujitsu.com
Cc: Mel Gorman mgor...@suse.de
Cc: linux-kernel@vger.kernel.org
---
kernel/kthread.c | 5 -
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/kernel/kthread.c b/kernel/kthread.c
index
Update my email address.
The old la...@cn.fujitsu.com address will stop working after Jul 10 2015.
Signed-off-by: Lai Jiangshan jiangshan...@gmail.com
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
MAINTAINERS | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/MAINTAINERS b
I am one of the dedicated reviewers of workqueue.c. Now I add myself
to the MAINTAINERS entry with the updated email address.
The old la...@cn.fujitsu.com address will stop working soon.
Signed-off-by: Lai Jiangshan jiangshan...@gmail.com
Signed-off-by: Lai Jiangshan la...@cn.fujitsu.com
---
MAINTAINERS | 1
Hi, TJ
Patches 4/5/6 do sometimes reduce CPU and temporary-memory usage.
But they are in a slow path, where small optimizations are commonly unwelcome.
Do I need to refactor the patches? I doubt it is necessary.
Thanks,
Lai
On Thu, Aug 13, 2015 at 12:03 AM, Paul E. McKenney
paul...@linux.vnet.ibm.com wrote:
On Wed, Aug 12, 2015 at 04:27:34PM +0200, Frederic Weisbecker wrote:
On Tue, Aug 11, 2015 at 08:42:58PM +0200, Luis R. Rodriguez wrote:
On Tue, Aug 11, 2015 at 10:49:36AM -0700, Andy Lutomirski wrote:
This
On Wed, Aug 12, 2015 at 1:49 AM, Andy Lutomirski l...@amacapital.net wrote:
This is a bit late, but here goes anyway.
Having played with the x86 context tracking hooks for awhile, I think
it would be nice if core code that needs to be aware of CPU context
(kernel, user, idle, guest, etc)
On Mon, Jul 13, 2015 at 5:57 PM, Peter Zijlstra pet...@infradead.org wrote:
On Fri, Jul 10, 2015 at 12:26:21PM -0500, Christoph Lameter wrote:
On Thu, 9 Jul 2015, Chris Mason wrote:
I think the topic is really interesting and we'll be able to get numbers
from production workloads to help
On Fri, Jul 10, 2015 at 3:09 AM, Chris Mason c...@fb.com wrote:
We've started experimenting with these to cut overheads in a few
critical places, and while we don't have numbers yet I really hope it
won't take too long.
I think the topic is really interesting and we'll be able to get
On Mon, Jul 13, 2015 at 5:57 PM, Peter Zijlstra pet...@infradead.org wrote:
On Fri, Jul 10, 2015 at 12:26:21PM -0500, Christoph Lameter wrote:
On Thu, 9 Jul 2015, Chris Mason wrote:
I think the topic is really interesting and we'll be able to get numbers
from production workloads to help
Hi, TJ
I think we need to add might_sleep() at the top of __cancel_work_timer().
The might_sleep() in start_flush_work() doesn't cover all the
paths of __cancel_work_timer().
And it can help to narrow the area of this bug.
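The coverage argument can be shown with a tiny userspace model: if the debug check lives only on the flush branch, a caller taking the fast path never exercises it; moved to the function entry, every path is checked. The function names here are stand-ins, not the kernel code:

```c
#include <assert.h>

static int might_sleep_hits;

/* Stand-in for the kernel's debug check. */
static void might_sleep(void) { might_sleep_hits++; }

static int cancel_work_timer_model(int need_flush)
{
	might_sleep();		/* at the top: checked on every path */
	if (need_flush)
		return 1;	/* flush path: where the old check lived */
	return 0;		/* fast path: previously unchecked */
}
```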
Hi Sedat Dilek
[ 24.705704] irq event stamp: 19968
[ 24.705706]
u.desnoy...@efficios.com>
> CC: "Paul E. McKenney" <paul...@linux.vnet.ibm.com>
> CC: Josh Triplett <j...@joshtriplett.org>
> CC: Steven Rostedt <rost...@goodmis.org>
> CC: Lai Jiangshan <jiangshan...@gmail.com>
> CC: <sta...@vger.kernel.org
eted) & 0x1;
> - __this_cpu_inc(sp->per_cpu_ref->c[idx]);
> + __this_cpu_inc(sp->per_cpu_ref->lock_count[idx]);
> smp_mb(); /* B */ /* Avoid leaking the critical section. */
> - __this_cpu_inc(sp->per_cpu_ref->seq[idx]);
> return idx;
> }
> EXPORT_SYMBOL_GPL(__srcu_read_lock);
> @@ -314,7 +285,7 @@ EXPORT_SYMBOL_GPL(__srcu_read_lock);
> void __srcu_read_unlock(struct srcu_struct *sp, int idx)
> {
> smp_mb(); /* C */ /* Avoid leaking the critical section. */
> - this_cpu_dec(sp->per_cpu_ref->c[idx]);
> + this_cpu_inc(sp->per_cpu_ref->unlock_count[idx]);
> }
> EXPORT_SYMBOL_GPL(__srcu_read_unlock);
>
> @@ -349,7 +320,7 @@ static bool try_check_zero(struct srcu_struct *sp, int
> idx, int trycount)
>
> /*
> * Increment the ->completed counter so that future SRCU readers will
> - * use the other rank of the ->c[] and ->seq[] arrays. This allows
> + * use the other rank of the ->(un)lock_count[] arrays. This allows
> * us to wait for pre-existing readers in a starvation-free manner.
> */
> static void srcu_flip(struct srcu_struct *sp)
>
Acked-by: Lai Jiangshan <jiangshan...@gmail.com>
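The quoted rework replaces the single per-bank counter pair (->c[] plus ->seq[]) with separate lock_count[]/unlock_count[] arrays. A single-threaded userspace model of the mechanism (no per-cpu, no barriers; purely illustrative):

```c
#include <assert.h>

/* Two banks of counters; readers use the bank selected by the low bit
 * of ->completed. */
struct srcu_model {
	unsigned long completed;
	unsigned long lock_count[2];
	unsigned long unlock_count[2];
};

static int srcu_model_read_lock(struct srcu_model *sp)
{
	int idx = sp->completed & 0x1;
	sp->lock_count[idx]++;
	return idx;
}

static void srcu_model_read_unlock(struct srcu_model *sp, int idx)
{
	sp->unlock_count[idx]++;
}

/* srcu_flip(): steer new readers to the other bank so the grace period
 * only has to wait for pre-existing readers. */
static void srcu_model_flip(struct srcu_model *sp)
{
	sp->completed++;
}

/* The grace period waits until this returns 0 for the old bank. */
static int srcu_model_readers_active(struct srcu_model *sp, int idx)
{
	return sp->lock_count[idx] != sp->unlock_count[idx];
}
```

The starvation-freedom claim in the quoted comment falls out of the flip: readers arriving after srcu_flip() increment the other bank, so the old bank's lock/unlock counts must eventually balance.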
On Thu, Nov 17, 2016 at 10:31 PM, Boqun Feng <boqun.f...@gmail.com> wrote:
> On Thu, Nov 17, 2016 at 08:18:51PM +0800, Lai Jiangshan wrote:
>> On Tue, Nov 15, 2016 at 10:37 PM, Paul E. McKenney
>> <paul...@linux.vnet.ibm.com> wrote:
>> > On Tue, Nov 15, 2016 a
On Thu, Nov 17, 2016 at 10:45 PM, Boqun Feng <boqun.f...@gmail.com> wrote:
> On Thu, Nov 17, 2016 at 06:38:29AM -0800, Paul E. McKenney wrote:
>> On Thu, Nov 17, 2016 at 05:49:57AM -0800, Paul E. McKenney wrote:
>> > On Thu, Nov 17, 2016 at 08:18:51PM +0800, Lai Jiangshan w
On Tue, Nov 15, 2016 at 10:37 PM, Paul E. McKenney
wrote:
> On Tue, Nov 15, 2016 at 09:44:45AM +0800, Boqun Feng wrote:
>>
>> __srcu_read_lock() used to be called with preemption disabled. I guess
>> the reason was because we have two percpu variables to increase. So
> +
> +/*
> + * No contention. Irq disable is only required.
> + */
> +static int same_context_plock(struct pend_lock *plock)
> +{
> + struct task_struct *curr = current;
> + int cpu = smp_processor_id();
> +
> + /* In the case of hardirq context */
> + if
On Mon, May 29, 2017 at 3:33 AM, Johannes Berg
wrote:
> Hi Tejun,
>
> I suspect this is a long-standing bug introduced by all the pool rework
> you did at some point, but I don't really know nor can I figure out how
> to fix it right now. I guess it could possibly also
On Wed, May 31, 2017 at 4:36 PM, Johannes Berg
wrote:
> Hi,
>
>> > #include
>> > #include
>> > #include
>> > #include
>> > #include
>> >
>> > DEFINE_MUTEX(mtx);
>> > static struct workqueue_struct *wq;
>> > static struct work_struct w1, w2;
>> >
>> > static void
On Mon, Oct 9, 2017 at 9:21 PM, Tejun Heo wrote:
> Josef reported a HARDIRQ-safe -> HARDIRQ-unsafe lock order detected by
> lockdep:
>
> [ 1270.472259] WARNING: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected
> [ 1270.472783] 4.14.0-rc1-xfstests-12888-g76833e8 #110 Not
On Mon, Oct 9, 2017 at 11:08 PM, Tejun Heo <t...@kernel.org> wrote:
> Hello,
>
> On Mon, Oct 09, 2017 at 11:02:34PM +0800, Lai Jiangshan wrote:
>> I was also thinking alternative code when reviewing.
>> The first is quite obvious. Testing POOL_MANAGER_ACTIVE
>>
truction as
> suggested by Boqun.
>
> Signed-off-by: Tejun Heo <t...@kernel.org>
> Reported-by: Josef Bacik <jo...@toxicpanda.com>
> Cc: Peter Zijlstra <pet...@infradead.org>
> Cc: Boqun Feng <boqun.f...@gmail.com>
> Cc: sta...@vger.kernel.org
> ---
> kerne
Hello, all
An interesting (at least to me) thinking came up to me when I found
that the lguest was removed. But I don't have enough knowledge
to find out the answer nor energy to implement it in some time.
Is it possible to implement kvm-pv which allows kvm to run on
the boxes without hardware
On Sat, Sep 30, 2017 at 12:39 AM, Paolo Bonzini <pbonz...@redhat.com> wrote:
> On 29/09/2017 17:47, Lai Jiangshan wrote:
>> Hello, all
>>
>> An interesting (at least to me) thinking came up to me when I found
>> that the lguest was removed. But I don't hav
On Sun, Oct 8, 2017 at 5:02 PM, Boqun Feng wrote:
> Josef reported a HARDIRQ-safe -> HARDIRQ-unsafe lock order detected by
> lockdep:
>
> | [ 1270.472259] WARNING: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected
> | [ 1270.472783] 4.14.0-rc1-xfstests-12888-g76833e8 #110
"smpcfd:dying" was missing before.
So was the invocation of the function smpcfd_dying_cpu().
Signed-off-by: Lai Jiangshan <jiangshan...@gmail.com>
CC: Richard Weinberger <rich...@nod.at>
cc: sta...@vger.kernel.org (v4.7+)
---
kernel/cpu.c | 10 +-
1 file changed, 5 insertion
On Thu, Sep 21, 2017 at 1:00 AM, Peter Zijlstra wrote:
> With lockdep-crossrelease we get deadlock reports that span cpu-up and
> cpu-down chains. Such deadlocks cannot possibly happen because cpu-up
> and cpu-down are globally serialized.
>
> CPU0 CPU1
Since the cpu/hotplug refactor is done, the hotplug
callbacks are called properly. So the workaround is
useless.
Signed-off-by: Lai Jiangshan <jiangshan...@gmail.com>
---
kernel/workqueue.c | 10 --
1 file changed, 10 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workq
cpuhp_bp_states and cpuhp_ap_states have different sets of steps
with no conflicting configured steps, so they can
be merged.
The original `[CPUHP_BRINGUP_CPU] = { },` is removed, because
the new cpuhp_hp_states has a CPUHP_ONLINE index which is larger
than CPUHP_BRINGUP_CPU.
Signed-off-by: Lai
Since the cpu/hotplug refactor is done,
workqueue_offline_cpu() is guaranteed to run on the
local CPU which is going offline.
Signed-off-by: Lai Jiangshan <jiangshan...@gmail.com>
---
kernel/workqueue.c | 15 ++-
1 file changed, 6 insertions(+), 9 deletions(-)
diff
On Sun, Dec 3, 2017 at 2:33 PM, Paul E. McKenney
<paul...@linux.vnet.ibm.com> wrote:
> On Fri, Dec 01, 2017 at 09:50:05PM +0800, Lai Jiangshan wrote:
>> cpuhp_bp_states_ap_states have diffent set of steps
>> without any conflicting configed steps, so that they can
>> be
aul E. McKenney <paul...@linux.vnet.ibm.com>
> Cc: Tejun Heo <t...@kernel.org>
> Cc: Lai Jiangshan <jiangshan...@gmail.com>
Reviewed-by: Lai Jiangshan <jiangshan...@gmail.com>
> ---
> kernel/workqueue.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion
but it doesn't hurt to
> change it and doing so avoids script-generated noise.
>
> Reported-by: Tobin C. Harding <m...@tobin.cc>
> Signed-off-by: Paul E. McKenney <paul...@linux.vnet.ibm.com>
Reviewed-by: Lai Jiangshan <jiangshan...@gmail.com>
> ---
> kernel/r
> |-current->current_pwq is NULL here!
> |-schedule()
>
>
> Avoid it by checking for task context in current_wq_worker(), and
> if not in task context, we shouldn't use the 'current' to check the
> condition.
>
> Re
On Thu, May 17, 2018 at 12:34 PM, Tejun Heo wrote:
> For historical reasons, the worker attach/detach functions don't
> currently manage worker->pool and the callers are manually and
> inconsistently updating it.
>
> This patch moves worker->pool updates into the worker
On Tue, Oct 24, 2017 at 9:18 AM, Li Bin wrote:
> When queue_work() is used in irq handler, there is a potential
> case that trigger NULL pointer dereference.
>
> worker_thread()
> |-spin_lock_irq()
>
On Wed, Jan 17, 2018 at 4:08 AM, Neeraj Upadhyay wrote:
>
>
> On 01/16/2018 11:05 PM, Tejun Heo wrote:
>>
>> Hello, Neeraj.
>>
>> On Mon, Jan 15, 2018 at 02:08:12PM +0530, Neeraj Upadhyay wrote:
>>>
>>> - kworker/0:0 gets chance to run on cpu1; while processing
>>>a
policy=1 prio=1 nice=0
> # cat /sys/devices/virtual/workqueue/system_percpu_highpri/sched_attr
> policy=0 prio=0 nice=-20
> # echo "policy=1 prio=2 nice=0" >
> /sys/devices/virtual/workqueue/system_percpu_highpri/s
On Mon, Jan 29, 2018 at 12:41 PM, Mike Galbraith <efa...@gmx.de> wrote:
> On Mon, 2018-01-29 at 12:15 +0800, Lai Jiangshan wrote:
>> I think adding priority boost to workqueue(flush_work()) is the best
>> way to fix the problem.
>
> I disagree, priority boosting is
On Tue, Jan 23, 2018 at 3:59 PM, wrote:
> From: Heng Zhang
>
> This RCU implementation (PRCU) is based on a fast consensus protocol
> published in the following paper:
>
> Fast Consensus Using Bounded Staleness for Scalable Read-mostly
>
On Fri, Jul 13, 2018 at 8:02 AM, Paul E. McKenney
wrote:
> Hello!
>
> I now have a semi-reasonable prototype of changes consolidating the
> RCU-bh, RCU-preempt, and RCU-sched update-side APIs in my -rcu tree.
> There are likely still bugs to be fixed and probably other issues as well,
> but a
On Wed, Mar 7, 2018 at 10:54 PM, Paul E. McKenney
<paul...@linux.vnet.ibm.com> wrote:
> On Wed, Mar 07, 2018 at 10:49:49AM +0800, Lai Jiangshan wrote:
>> On Wed, Mar 7, 2018 at 1:33 AM, Tejun Heo <t...@kernel.org> wrote:
>>
>> > +/**
>> > + * queue_r
On Wed, Mar 7, 2018 at 1:33 AM, Tejun Heo wrote:
> +/**
> + * queue_rcu_work_on - queue work on specific CPU after a RCU grace period
> + * @cpu: CPU number to execute work on
> + * @wq: workqueue to use
> + * @rwork: work to queue
For many people, "RCU grace period" is clear
n __queue_work() */
> + local_irq_disable();
> + __queue_work(WORK_CPU_UNBOUND, rwork->wq, >work);
> + local_irq_enable();
> +}
> +
> +/**
> + * queue_rcu_work - queue work after a RCU grace period
> + * @wq: workqueue to use
> + * @rwork: work to queue
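The semantics of queue_rcu_work() in the quoted hunk, deferring a work item until after an RCU grace period, can be modeled synchronously in userspace. The two-phase split below (queue records intent; the simulated grace-period completion actually runs the work) is the shape of the real API; everything else is a stand-in:

```c
#include <assert.h>

struct rcu_work_model {
	int queued;
	int ran;
};

/* Phase 1: stands in for call_rcu(&rwork->rcu, ...); nothing runs yet. */
static void queue_rcu_work_model(struct rcu_work_model *rwork)
{
	rwork->queued = 1;
}

/* Phase 2: the RCU callback would call __queue_work(); here we just
 * mark the work as executed, exactly once. */
static void rcu_gp_complete_model(struct rcu_work_model *rwork)
{
	if (rwork->queued) {
		rwork->queued = 0;
		rwork->ran = 1;
	}
}
```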
On Tue, Mar 6, 2018 at 12:14 AM, Paul E. McKenney
wrote:
> On Mon, Mar 05, 2018 at 08:33:20AM -0600, Eric W. Biederman wrote:
>>
>> Moving this discussion to a public list as discussing how to reduce the
>> number of rcu variants does not make sense in private. We
The manager_arb mutex doesn't exist any more.
Signed-off-by: Lai Jiangshan <jiangshan...@gmail.com>
---
kernel/workqueue.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 19785a092026..698b6f1ecddd 100644
--- a/kernel/workqueue.c
+++ b/
On Tue, Mar 20, 2018 at 12:45 AM, Tejun Heo <t...@kernel.org> wrote:
> Hello, Lai.
>
> On Fri, Mar 16, 2018 at 02:01:35PM +0800, Lai Jiangshan wrote:
>> > +bool flush_rcu_work(struct rcu_work *rwork)
>> > +{
>> > + if (test_bit(WORK_STRU
Since the worker rebinding behavior was refactored, there is
no idle worker off the idle_list now. The comment is outdated
and can simply be removed.
This also groups nr_workers and nr_idle together.
Signed-off-by: Lai Jiangshan <jiangshan...@gmail.com>
---
kernel/workqueue.c | 5 ++---
refcount of the pool in
manage_workers(). "indirect" means it gets a refcount of
the first involved pwq which holds a refcount of the pool.
This refcount can prevent the pool from being destroyed.
The original synchronization mechanism (wq_manager_wait)
is also removed.
Signed-off-by: Lai
On Thu, Sep 13, 2018 at 9:51 AM wrote:
>
> >> From: Liu Song
> >>
> >> Although the 'need_to_create_worker' has been determined to be
> >> true before entering the function. However, adjusting the order
> >> of judgment can combine two judgments in the loop. Also improve
> >> the matching
On 04/02/2013 02:44 AM, Tejun Heo wrote:
> On Sun, Mar 31, 2013 at 12:29:14AM +0800, Lai Jiangshan wrote:
>> freezing is nothing related to pools, but POOL_FREEZING adds a connection,
>> and causes freeze_workqueues_begin() and thaw_workqueues() complicated.
>>
>>
Merge the code that clears POOL_DISASSOCIATED into rebind_workers(), and
rename rebind_workers() to associate_cpu_pool().
This merges highly related code together and simplifies
workqueue_cpu_up_callback().
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 21 ++---
1 files
Simplify pwq_adjust_max_active().
Make freeze_workqueues_begin() and thaw_workqueues() quickly skip non-freezable workqueues.
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 13 ++---
1 files changed, 6 insertions(+), 7 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
If we have 4096 CPUs, workqueue_cpu_up_callback() will traverse too many CPUs;
to avoid that, we use for_each_cpu_worker_pool() for the per-cpu pools and
for_each_unbound_pool() for the unbound pools.
After this, for_each_pool() becomes unused, but we keep it for possible
future usage.
Signed-off-by: Lai
Freezing is not related to pools, but POOL_FREEZING adds a connection
and complicates freeze_workqueues_begin() and thaw_workqueues().
Since freezing is a workqueue-instance attribute, we introduce __WQ_FREEZING
in wq->flags instead and remove POOL_FREEZING.
Signed-off-by:
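The two freezer changes above, a per-workqueue __WQ_FREEZING flag and the fast skip of non-freezable workqueues, fit together in a short sketch. The flag bit values and the flat array iteration are illustrative, not the kernel's:

```c
#include <assert.h>

/* Hypothetical flag values; the kernel defines these differently. */
#define WQ_FREEZABLE	(1u << 0)
#define __WQ_FREEZING	(1u << 1)

struct wq_model { unsigned int flags; };

/* Mark every freezable workqueue as freezing; skip the rest early
 * instead of touching their pools. */
static void freeze_workqueues_begin_model(struct wq_model *wqs, int n)
{
	for (int i = 0; i < n; i++) {
		if (!(wqs[i].flags & WQ_FREEZABLE))
			continue;	/* fast skip for non-freezable wq */
		wqs[i].flags |= __WQ_FREEZING;
	}
}
```

Keeping the state on the workqueue itself is what removes the need for POOL_FREEZING: the freezer no longer has to visit pools that only exist to serve non-freezable workqueues.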
8.003594] [] deactivate_locked_super+0x2f/0x56
[8.008077] [] deactivate_super+0x2e/0x31
[8.012523] [] mntput_no_expire+0x103/0x108
[8.017050] [] sys_umount+0x2a2/0x2c4
[8.021429] [] sys_oldumount+0x1e/0x20
[8.025678] [] sysenter_do_call+0x12/0x38
Signed-off-by: Lai Jiangshan
---
kernel/workqueue
Calculate the pool's node earlier, and allocate the pool
from that node.
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 29 +++--
1 files changed, 15 insertions(+), 14 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 737646d..3f33077
When we fail to allocate a node's pwq, we can use the default pwq
for that node.
Thus we can avoid failure after the default pwq has been allocated, and remove
some failure-path code.
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 28 +++-
1 files changed, 7 insertions
[resend in plain text mode (I did not notice the gmail changed the
default mode, sorry)]
On Fri, Apr 5, 2013 at 12:17 AM, Lai Jiangshan wrote:
> Hi, ALL
>
> I also encountered the same problem.
>
> git bisect:
>
> 14134f6584212d585b310ce95428014b653dfaf6 is the firs
On 03/19/2013 10:16 PM, Sebastian Andrzej Siewior wrote:
> DEFINE_SRCU() and DEFINE_STATIC_SRCU() does the same thing except for
> the "static" attribute. This patch moves the common pieces into
> _DEFINE_SRCU() which is used by the the former macros either adding the
> static attribute or not.
>
On 03/19/2013 10:16 PM, Sebastian Andrzej Siewior wrote:
> There are macros for static initializer for the three out of four
> possible notifier types, that are:
> ATOMIC_NOTIFIER_HEAD()
> BLOCKING_NOTIFIER_HEAD()
> RAW_NOTIFIER_HEAD()
>
> This patch provides a static
[Ping]
Hi, Eric Paris
Could you review this patch?
Thanks,
Lai
On 03/16/2013 12:50 AM, Lai Jiangshan wrote:
> fsnotify implements its own call_srcu() by:
> dedicated thread + synchronize_srcu()
>
> But srcu provides call_srcu() now, so we should convert them to use
> ex
On 04/04/2013 10:55 PM, Tejun Heo wrote:
>>From 5c529597e922c26910fe49b8d5f93aeaca9a2415 Mon Sep 17 00:00:00 2001
> From: Lai Jiangshan
> Date: Thu, 4 Apr 2013 10:05:38 +0800
>
> destroy_workqueue() performs several sanity checks before proceeding
> with destruction
isection mistake, but if so, then the LSB test-cases obviously have
> to be fixed, and the commit that causes the problem needs to be
> reverted. Test-cases count for nothing compared to actual users.
>
> Linus
>
> On Thu, Apr 4, 2013 at 9:17 AM, Lai Jiangshan wrote:
On 04/08/2013 06:03 PM, Sebastian Andrzej Siewior wrote:
> On 04/05/2013 09:21 AM, Lai Jiangshan wrote:
>> Hi, Sebastian
>
> Hi Lai,
>
>> I don't want to expose __SRCU_STRUCT_INIT(),
>> due to it has strong coupling with the percpu array.
>>
>> I hope
On 07/16/2013 10:41 PM, Srivatsa S. Bhat wrote:
> Hi,
>
> I have been seeing this warning every time during boot. I haven't
> spent time digging through it though... Please let me know if
> any machine-specific info is needed.
>
> Regards,
> Srivatsa S. Bhat
>
>
>
On 08/20/2013 10:42 AM, Paul E. McKenney wrote:
> From: "Paul E. McKenney"
>
> This commit drops an unneeded ACCESS_ONCE() and simplifies an "our work
> is done" check in _rcu_barrier(). This applies feedback from Linus
> (https://lkml.org/lkml/2013/7/26/777) that he gave to similar code
> in
On 08/20/2013 10:42 AM, Paul E. McKenney wrote:
> From: Borislav Petkov
>
> CONFIG_RCU_FAST_NO_HZ can increase grace-period durations by up to
> a factor of four, which can result in long suspend and resume times.
> Thus, this commit temporarily switches to expedited grace periods when
>
On 08/20/2013 10:51 AM, Paul E. McKenney wrote:
> From: "Paul E. McKenney"
>
> This commit adds a object_debug option to rcutorture to allow the
> debug-object-based checks for duplicate call_rcu() invocations to
> be deterministically tested.
>
> Signed-off-by: Paul E. McKenney
> Cc: Mathieu
On 08/21/2013 02:38 AM, Paul E. McKenney wrote:
> On Tue, Aug 20, 2013 at 06:02:39PM +0800, Lai Jiangshan wrote:
>> On 08/20/2013 10:51 AM, Paul E. McKenney wrote:
>>> From: "Paul E. McKenney"
>>>
>>> This commit adds a object_debug option to rcutort
On 08/21/2013 11:17 AM, Paul E. McKenney wrote:
> On Sat, Aug 10, 2013 at 08:07:15AM -0700, Paul E. McKenney wrote:
>> On Sat, Aug 10, 2013 at 11:43:59AM +0800, Lai Jiangshan wrote:
>
> [ . . . ]
>
>>> So I have to narrow the range of suspect locks. Two choices:
>&g
On 07/27/2013 07:19 AM, Paul E. McKenney wrote:
> From: "Paul E. McKenney"
>
> At least one CPU must keep the scheduling-clock tick running for
> timekeeping purposes whenever there is a non-idle CPU. However, with
> the new nohz_full adaptive-idle machinery, it is difficult to distinguish
>