Reported-by: Yasuaki Ishimatsu
Cc: Tejun Heo
Cc: Yasuaki Ishimatsu
Cc: "Gu, Zheng"
Cc: tangchen
Cc: Hiroyuki KAMEZAWA
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 76 +-
1 file changed, 53 insertions(+), 23 deletion
On 01/13/2015 11:22 PM, Tejun Heo wrote:
> Hello,
>
> On Tue, Jan 13, 2015 at 03:19:09PM +0800, Lai Jiangshan wrote:
>> The mapping of the *online* cpus to nodes is already maintained by the numa code.
>>
>> What the workqueue needs is a special mapping:
>> The
In rcu_gp_init(), rnp->completed equals rsp->completed in THEORY,
so we normally don't need to touch it. If something goes wrong,
the code will complain, fix up rnp->completed, and avoid an oops.
Signed-off-by: Lai Jiangshan
---
kernel/rcu/tree.c | 4 ++--
1 file changed, 2 insertions(+
On 12/26/2014 04:11 AM, Tejun Heo wrote:
> On Wed, Dec 17, 2014 at 01:56:29PM +0900, Kamezawa Hiroyuki wrote:
>> Let me correct my words. The main purpose of this patch 1/2 is handling the case
>> where a node disappears after boot,
>> and trying to handle physical node hotplug cases.
>>
>> Changes of cpu<->node
On 12/26/2014 04:14 AM, Tejun Heo wrote:
> On Mon, Dec 15, 2014 at 09:23:49AM +0800, Lai Jiangshan wrote:
>> The pwqs of the old node's cpumask are indeed discarded. But the pools of the old
>> node's cpumask may be recycled. For example, a new workqueue's affinity is set
Commit 48a7639ce80c ("rcu: Make callers awaken grace-period kthread")
removed the irq_work_queue() call, so TREE_RCU no longer needs
irq work.
Signed-off-by: Lai Jiangshan
---
init/Kconfig |2 --
kernel/rcu/tree.h |1 -
2 files changed, 0 insertions(+), 3 deletions
On 12/12/2014 06:19 PM, Lai Jiangshan wrote:
> Yasuaki Ishimatsu hit an allocation failure bug when the numa mapping
> between CPU and node is changed. This was the last scene:
> SLUB: Unable to allocate memory on node 2 (gfp=0x80d0)
> cache: kmalloc-192, object size: 192, buff
On 12/17/2014 12:45 AM, Kamezawa Hiroyuki wrote:
> With node online/offline, the cpu<->node relationship is established.
> Workqueue uses information that was established at boot time, but
> it may be changed by node hotplugging.
>
> Once pool->node points to a stale node, following allocation failure
> hap
On 12/16/2014 03:32 PM, Kamezawa Hiroyuki wrote:
> (2014/12/16 14:30), Lai Jiangshan wrote:
>> On 12/15/2014 07:14 PM, Kamezawa Hiroyuki wrote:
>>> Unbound wq pool's node attribute is calculated at its allocation.
>>> But it's now calculated based on possible
On 12/15/2014 07:18 PM, Kamezawa Hiroyuki wrote:
> Workqueue keeps the cpu<->node relationship for all possible cpus.
> The original information was built at boot, but it may change when
> a new node is added.
>
> Update the information when a new node becomes ready, using the node-hotplug callback.
>
> Sig
On 12/15/2014 07:16 PM, Kamezawa Hiroyuki wrote:
> The percpu workqueue pools are persistent and never freed.
> But the cpu<->node relationship can be changed by cpu hotplug, and pool->node
> can then point to an offlined node.
>
> If pool->node points to an offlined node,
> following allocation failure c
On 12/15/2014 07:14 PM, Kamezawa Hiroyuki wrote:
> Unbound wq pool's node attribute is calculated at its allocation.
> But it's now calculated based on possible cpu<->node information
> which can be wrong after cpu hotplug/unplug.
>
> If wrong pool->node is set, following allocation error will hap
On 12/13/2014 01:12 AM, Tejun Heo wrote:
> On Fri, Dec 12, 2014 at 06:19:51PM +0800, Lai Jiangshan wrote:
>> wq_numa_init() will quit directly on some bonkers cases without freeing the
>> memory. Add the missing cleanup code.
>>
>> Cc: Tejun Heo
>> Cc: Yas
On 12/15/2014 12:04 PM, Kamezawa Hiroyuki wrote:
> (2014/12/15 12:34), Lai Jiangshan wrote:
>> On 12/15/2014 10:55 AM, Kamezawa Hiroyuki wrote:
>>> (2014/12/15 11:48), Lai Jiangshan wrote:
>>>> On 12/15/2014 10:20 AM, Kamezawa Hiroyuki wrote:
>>>>
On 12/15/2014 10:55 AM, Kamezawa Hiroyuki wrote:
> (2014/12/15 11:48), Lai Jiangshan wrote:
>> On 12/15/2014 10:20 AM, Kamezawa Hiroyuki wrote:
>>> (2014/12/15 11:12), Lai Jiangshan wrote:
>>>> On 12/14/2014 12:38 AM, Kamezawa Hiroyuki wrote:
>>>>> Al
On 12/15/2014 10:20 AM, Kamezawa Hiroyuki wrote:
> (2014/12/15 11:12), Lai Jiangshan wrote:
>> On 12/14/2014 12:38 AM, Kamezawa Hiroyuki wrote:
>>> Although workqueue detects the cpu<->node relationship at boot,
>>> it is finally determined in cpu_up().
On 12/14/2014 12:38 AM, Kamezawa Hiroyuki wrote:
> Although workqueue detects the cpu<->node relationship at boot,
> it is finally determined in cpu_up().
> This patch tries to update pool->node using the online status of cpus.
>
> 1. When a node goes down, clear per-cpu pool's node attr.
> 2. Whe
On 12/14/2014 12:35 AM, Kamezawa Hiroyuki wrote:
> remove node aware unbound pools if node goes offline.
>
> scan unbound workqueue and remove numa affine pool when
> a node goes offline.
>
> Signed-off-by: KAMEZAWA Hiroyuki
> ---
> kernel/workqueue.c | 29 +
> 1 fil
On 12/13/2014 01:18 AM, Tejun Heo wrote:
> On Fri, Dec 12, 2014 at 06:19:52PM +0800, Lai Jiangshan wrote:
> ...
>> +static void wq_update_numa_mapping(int cpu)
>> +{
>> +int node, orig_node = NUMA_NO_NODE, new_node = cpu_to_node(cpu);
>> +
>> +
smp_send_reschedule+0x5d/0x60
> [ 890.156187] [] resched_curr+0xa8/0xd0
> [ 890.156187] [] check_preempt_curr+0x80/0xa0
> [ 890.156187] [] attach_task+0x48/0x50
> [ 890.156187] [] active_load_balance_cpu_stop+0x105/0x250
> [ 890.156187] [] ? set_next_entity+0x80/0x80
> [ 89
On 12/13/2014 01:27 AM, Tejun Heo wrote:
> On Fri, Dec 12, 2014 at 06:19:54PM +0800, Lai Jiangshan wrote:
>> We fixed the major cases when the numa mapping is changed.
>>
>> We still have the assumption that when the node<->cpu mapping is changed
>> the original
On 12/13/2014 01:25 AM, Tejun Heo wrote:
> On Fri, Dec 12, 2014 at 06:19:53PM +0800, Lai Jiangshan wrote:
>> Yasuaki Ishimatsu hit a bug when the numa mapping between CPU and node
>> is changed. And the previous path fixup wq_numa_possible_cpumask.
>> (See more information
new pool->node of new pools is correct,
and existing wqs' affinity is fixed up by wq_update_unbound_numa()
after wq_update_numa_mapping().
Reported-by: Yasuaki Ishimatsu
Cc: Tejun Heo
Cc: Yasuaki Ishimatsu
Cc: "Gu, Zheng"
Cc: tangchen
Cc: Hiroyuki KAMEZAWA
Signed-off-by:
wq_numa_init() will quit directly on some bonkers cases without freeing the
memory. Add the missing cleanup code.
Cc: Tejun Heo
Cc: Yasuaki Ishimatsu
Cc: "Gu, Zheng"
Cc: tangchen
Cc: Hiroyuki KAMEZAWA
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c |3 +++
1 files
Cc: "Gu, Zheng"
Cc: tangchen
Cc: Hiroyuki KAMEZAWA
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 53 ++-
1 files changed, 39 insertions(+), 14 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 4c88b
c: tangchen
Cc: Hiroyuki KAMEZAWA
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 16 +++-
1 files changed, 11 insertions(+), 5 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 29a96c3..9e35a79 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1
et is untested. It is sent for earlier review.
Thanks,
Lai.
Reported-by: Yasuaki Ishimatsu
Cc: Tejun Heo
Cc: Yasuaki Ishimatsu
Cc: "Gu, Zheng"
Cc: tangchen
Cc: Hiroyuki KAMEZAWA
Lai Jiangshan (5):
workqueue: fix memory leak in wq_numa_init()
workqueue: update wq_numa_poss
update the affinity in this case.
Cc: Tejun Heo
Cc: Yasuaki Ishimatsu
Cc: "Gu, Zheng"
Cc: tangchen
Cc: Hiroyuki KAMEZAWA
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 15 +++
1 files changed, 15 insertions(+), 0 deletions(-)
diff --git a/kernel/workqueue.c b/ke
:
size kernel/rcu/tiny-old.o kernel/rcu/tiny-patched.o
   text    data    bss     dec    hex filename
   3449     206      8    3663    e4f kernel/rcu/tiny-old.o
   2406     144      8    2558    9fe kernel/rcu/tiny-patched.o
Signed-off-by: Lai Jiangshan
---
kernel/rcu/rcu.h |6
On 11/28/2014 11:42 PM, Paul E. McKenney wrote:
> On Fri, Nov 28, 2014 at 05:43:31PM +0800, Lai Jiangshan wrote:
>> Hi, Paul
>>
>> These two patches use the special feature of the UP system:
>> in UP, quiescent state == grace period.
>>
>> For rcu_bh
On 12/08/2014 09:54 PM, Steven Rostedt wrote:
> On Mon, 8 Dec 2014 14:27:01 +1100
> Anton Blanchard wrote:
>
>> I have a busy ppc64le KVM box where guests sometimes hit the infamous
>> "kernel BUG at kernel/smpboot.c:134!" issue during boot:
>>
>> BUG_ON(td->cpu != smp_processor_id());
>>
>> Bas
ch causes no functional difference.
>
> Signed-off-by: Tejun Heo
> ---
Reviewed-by: Lai Jiangshan
> Applying to wq/for-3.19.
>
> Thanks.
>
> kernel/workqueue.c |2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> --- a/kernel/wo
er turns out to complicate things with the planned
> rescuer_thread() update. Let's invert them. This doesn't cause any
> behavior differences.
>
> Signed-off-by: Tejun Heo
> Cc: NeilBrown
> Cc: Dongsu Park
> Cc: Lai Jiangshan
Reviewed-by: Lai Jiangshan
>
On 12/05/2014 08:11 AM, Paul E. McKenney wrote:
> On Thu, Dec 04, 2014 at 06:50:24PM -0500, Pranith Kumar wrote:
>> SRCU is not necessary to be compiled by default in all cases. For
>> tinification
>> efforts not compiling SRCU unless necessary is desirable.
>>
>> The current patch tries to make c
On 12/04/2014 02:02 AM, Tejun Heo wrote:
> So, something like the following. Only compile tested. I'll test it
> and post proper patches w/ due credits.
>
> Thanks.
>
> Index: work/kernel/workqueue.c
> ===
> --- work.orig/kernel/wo
On 12/03/2014 12:58 AM, Dâniel Fraga wrote:
> On Tue, 2 Dec 2014 16:40:37 +0800
> Lai Jiangshan wrote:
>
>> It is needed at least for testing.
>>
>> CONFIG_TREE_PREEMPT_RCU=y with CONFIG_PREEMPT=n is needed for testing too.
>>
>> Please enable them (
On 12/02/2014 03:14 AM, Paul E. McKenney wrote:
> On Sun, Nov 30, 2014 at 11:02:43PM -0200, Dâniel Fraga wrote:
>> On Sun, 30 Nov 2014 16:21:19 -0800
>> Linus Torvalds wrote:
>>
>>> Maybe you'll have to turn off RCU_CPU_STALL_VERBOSE first.
>>>
>>> Although I think you should be able to just edit
directly in call_rcu_bh().
Signed-off-by: Lai Jiangshan
---
kernel/rcu/tiny.c | 38 +-
1 files changed, 17 insertions(+), 21 deletions(-)
diff --git a/kernel/rcu/tiny.c b/kernel/rcu/tiny.c
index 805b6d5..f8e19ac 100644
--- a/kernel/rcu/tiny.c
+++ b/kernel/rcu
()                                 rcu_sched_qs()
QS, and GP and advance cb          QS, and GP and advance cb
wake up the ksoftirqd              wake up the ksoftirqd
set resched
resched to ksoftirqd (or other)    resched to ksoftirqd (or other)
These two code paths are almost the same.
Signed-off-by: Lai Jiangshan
can change rcu_bh_qs() and rcu_idle/irq_enter/exit() to static inline functions
to reduce the binary size after these two patches are accepted.
Thanks,
Lai
Cc: Josh Triplett
Cc: Steven Rostedt
Cc: Mathieu Desnoyers
Lai Jiangshan (2):
record rcu_bh quiescent state in RCU_SOFTIRQ
tiny_rcu: resched
>
> Signed-off-by: Paul E. McKenney
>
Reviewed-by: Lai Jiangshan
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index 8749f43f3f05..fc0236992655 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -759,39 +759,71 @@ void rcu_irq_enter(void)
On 11/18/2014 07:55 PM, Tejun Heo wrote:
> Hello,
>
> On Tue, Nov 18, 2014 at 05:19:18PM +0800, Lai Jiangshan wrote:
>> Is it too ugly?
>
> What is "it"? The whole thing? percpu preloading? I'm just gonna
> continue assuming that you're talking
On 11/14/2014 06:09 AM, Tejun Heo wrote:
> Implement set of pointers. Pointers can be added, deleted and
> iterated. It's currently implemented as a thin rbtree wrapper making
> addition and removal O(log N). A drawback is that iteration isn't RCU
> safe, which is okay for now. This will be use
*/
complete(&rnp->boost_completion);
}
Just revert the patch to avoid it.
Cc: Thomas Gleixner
Cc: Steven Rostedt
Cc: Peter Zijlstra
Signed-off-by: Lai Jiangshan
---
kernel/rcu/tree.h|5 -
kernel/rcu/tree_plugin.h |8 +---
2 files changed, 1 insertions(+), 1
On 11/18/2014 12:27 PM, NeilBrown wrote:
>
> When there is serious memory pressure, all workers in a pool could be
> blocked, and a new thread cannot be created because it requires memory
> allocation.
>
> In this situation a WQ_MEM_RECLAIM workqueue will wake up the
> rescuer thread to do some w
worker" to "need_to_create_worker" ?
>>> Then it will stop as soon as there in an idle worker thread.
>>> That is the condition that keeps maybe_create_worker() looping.
>>> ??
>>
>> Yeah, that'd be a better condition and can work out. C
On 10/29/2014 10:38 PM, Tejun Heo wrote:
> On Wed, Oct 29, 2014 at 05:26:34PM +0800, pang.xun...@zte.com.cn wrote:
>> The memset in ida_init() already handles idr, so there's some
>> redundancy in the following idr_init().
>>
>> This patch removes the memset, and clears ida->free_bitmap instead.
>>
ping
On 10/08/2014 11:53 AM, Lai Jiangshan wrote:
> Hi, TJ
>
> These patches are for unbound workqueue management (hotplug).
>
> This patchset simplifies unbound workqueue management during hotplug.
> This is also a preparation patchset for later unbound workqueue ma
On 10/23/2014 07:03 PM, Peter Zijlstra wrote:
> On Thu, Oct 23, 2014 at 06:14:45PM +0800, Lai Jiangshan wrote:
>>
>>>
>>> +struct vm_area_struct *find_vma_srcu(struct mm_struct *mm, unsigned long
>>> addr)
>>> +{
>>> +
On 10/22/2014 01:56 AM, Peter Zijlstra wrote:
> On Tue, Oct 21, 2014 at 08:09:48PM +0300, Kirill A. Shutemov wrote:
>> It would be interesting to see if the patchset affects non-condended case.
>> Like a one-threaded workload.
>
> It does, and not in a good way, I'll have to look at that... :/
Ma
>
> +struct vm_area_struct *find_vma_srcu(struct mm_struct *mm, unsigned long
> addr)
> +{
> + struct vm_area_struct *vma;
> + unsigned int seq;
> +
> + WARN_ON_ONCE(!srcu_read_lock_held(&vma_srcu));
> +
> + do {
> + seq = read_seqbegin(&mm->mm_seq);
> +
for this reason, and it will be removed
in a later patch.
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c |4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 7a217f0..9bc3a87 100644
--- a/kernel/workqueue.c
+++ b/kernel
Hi, TJ
These patches are for unbound workqueue management (hotplug).
This patchset simplifies unbound workqueue management during hotplug.
It is also a preparation patchset for later unbound workqueue management
patches.
Thanks,
Lai.
Lai Jiangshan (3):
workqueue: add
-allocation and installation are changed to be protected by
wq_pool_mutex. Now there is no reason for get_online_cpus() to
exist; remove it!
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 15 ++-
1 files changed, 2 insertions(+), 13 deletions(-)
diff --git a/kernel/workqueue.c b
in the cpu-hotplug callbacks and wq_calc_node_cpumask()
can use it instead of cpumask_of_node(node). Thus wq_calc_node_cpumask()
becomes much simpler and @cpu_going_down is gone.
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 42 --
1 files c
On 09/23/2014 10:38 PM, Tejun Heo wrote:
> On Mon, Sep 22, 2014 at 04:04:37PM +0800, Lai Jiangshan wrote:
>> It seems incomplete if the pool_ids file doesn't include the default
>> pwq's pool. Add it and the result:
>>
>> # cat pool_ids
>> 0:9 1:10
ded,
so it would be better to remove it IMO.
Signed-off-by: Lai Jiangshan
---
include/linux/prio_heap.h | 58 -
lib/Makefile |2 +-
lib/prio_heap.c | 70 -
3 files changed, 1 insertio
It seems incomplete if the pool_ids file doesn't include the default
pwq's pool. Add it and the result:
# cat pool_ids
0:9 1:10
default:8
rcu_read_lock_sched() is also changed to mutex_lock(&wq->mutex)
for accessing the default pwq.
Signed-off-by: Lai Jiangshan
---
kernel/w
The original code are the same as RB_DECLARE_CALLBACKS().
CC: Michel Lespinasse
Signed-off-by: Lai Jiangshan
---
drivers/block/drbd/drbd_interval.c | 36 ++--
1 files changed, 2 insertions(+), 34 deletions(-)
diff --git a/drivers/block/drbd/drbd_interval.c
b
bd_insert_interval() may cancel the insertion while traversing;
in this case, the just-added augment code does nothing before the cancel,
since the @this node is already in the subtrees.
CC: Michel Lespinasse
Signed-off-by: Lai Jiangshan
---
drivers/block/drbd/drbd_interval.c |4 +
The comment is copied from Documentation/rbtree.txt, but this comment
is so important that it should also be in the code.
CC: Andrew Morton
CC: Michel Lespinasse
Signed-off-by: Lai Jiangshan
---
include/linux/rbtree_augmented.h | 10 ++
1 files changed, 10 insertions(+), 0 deletions
Commit-ID: 5cd038f53ed9ec7a17ab7d536a727363080f4210
Gitweb: http://git.kernel.org/tip/5cd038f53ed9ec7a17ab7d536a727363080f4210
Author: Lai Jiangshan
AuthorDate: Wed, 4 Jun 2014 16:25:15 +0800
Committer: Ingo Molnar
CommitDate: Tue, 9 Sep 2014 06:47:27 +0200
sched: Migrate waking tasks
On 09/03/2014 11:15 PM, Peter Zijlstra wrote:
> On Mon, Sep 01, 2014 at 11:04:23AM +0800, Lai Jiangshan wrote:
>> Hi, Peter
>>
>> Could you make a patch for it, please? Jason J. Herne's test showed we
>> addressed the bug. But the fix is not in kernel yet. Some n
following tags in your patch:
Reported-by: Sasha Levin
Reported-by: Jason J. Herne
Tested-by: Jason J. Herne
Acked-by: Lai Jiangshan
Thanks,
Lai
On 06/06/2014 09:36 PM, Peter Zijlstra wrote:
> On Thu, Jun 05, 2014 at 06:54:35PM +0800, Lai Jiangshan wrote:
>> diff --git a/kernel/sc
On 08/06/2014 05:55 AM, Paul E. McKenney wrote:
> On Tue, Aug 05, 2014 at 08:47:55AM +0800, Lai Jiangshan wrote:
>> On 08/04/2014 10:56 PM, Peter Zijlstra wrote:
>>> On Mon, Aug 04, 2014 at 02:25:15PM +0200, Peter Zijlstra wrote:
>>>> On Mon, Aug 04, 2014 at 04:5
On 08/06/2014 05:55 AM, Paul E. McKenney wrote:
> On Tue, Aug 05, 2014 at 08:47:55AM +0800, Lai Jiangshan wrote:
>> On 08/04/2014 10:56 PM, Peter Zijlstra wrote:
>>> On Mon, Aug 04, 2014 at 02:25:15PM +0200, Peter Zijlstra wrote:
>>>> On Mon, Aug 04, 2014 at 04:5
I don't think this one needs nested sleeps.
diff --git a/fs/notify/inotify/inotify_user.c b/fs/notify/inotify/inotify_user.c
index cc423a3..1ca5888 100644
--- a/fs/notify/inotify/inotify_user.c
+++ b/fs/notify/inotify/inotify_user.c
@@ -233,15 +233,16 @@ static ssize_t inotify_read(struct file *fi
0)
>
> --------
> Christoph Lameter (1):
> percpu: Use ALIGN macro instead of hand coding alignment calculation
>
> Lai Jiangshan (2):
> workqueue: clear POOL_DISASSOCIATED in rebind_workers()
>
On 08/04/2014 10:56 PM, Peter Zijlstra wrote:
> On Mon, Aug 04, 2014 at 02:25:15PM +0200, Peter Zijlstra wrote:
>> On Mon, Aug 04, 2014 at 04:50:44AM -0700, Paul E. McKenney wrote:
>>> OK, I will bite...
>>>
>>> What kinds of tasks are on a runqueue, but neither ->on_cpu nor
>>> PREEMPT_ACTIVE?
>>
On 08/04/2014 03:46 PM, Peter Zijlstra wrote:
> On Mon, Aug 04, 2014 at 09:28:45AM +0800, Lai Jiangshan wrote:
>> On 08/01/2014 05:55 AM, Paul E. McKenney wrote:
>>> + rcu_read_lock();
>>> + for_each_process_thread(g, t) {
>>> +
On 08/02/2014 05:54 AM, David Rientjes wrote:
> On Thu, 31 Jul 2014, Lai Jiangshan wrote:
>
>> If the smpboot_register_percpu_thread() is called after
>> smpboot_create_threads()
>> but before __cpu_up(), the smpboot thread of the online-ing CPU is not
>> created
On 08/01/2014 05:55 AM, Paul E. McKenney wrote:
> + rcu_read_lock();
> + for_each_process_thread(g, t) {
> + if (t != current && ACCESS_ONCE(t->on_rq) &&
> + !is_idle_task(t)) {
> + get_task_struct(t);
>
On 08/04/2014 06:05 AM, Paul E. McKenney wrote:
> On Sun, Aug 03, 2014 at 03:33:18PM +0200, Oleg Nesterov wrote:
>> On 08/02, Paul E. McKenney wrote:
>>>
>>> On Sat, Aug 02, 2014 at 04:56:16PM +0200, Oleg Nesterov wrote:
On 07/31, Paul E. McKenney wrote:
>
> + rcu_read_lock();
On 08/01/2014 12:09 AM, Chris Metcalf wrote:
> On 7/31/2014 7:51 AM, Michal Hocko wrote:
>> On Thu 31-07-14 11:30:19, Lai Jiangshan wrote:
>>> It is suggested that cpumask_var_t and alloc_cpumask_var() should be used
>>> instead of struct cpumask. But I don't
On 08/01/2014 05:55 AM, Paul E. McKenney wrote:
> From: "Paul E. McKenney"
>
> This commit adds a new RCU-tasks flavor of RCU, which provides
> call_rcu_tasks(). This RCU flavor's quiescent states are voluntary
> context switch (not preemption!), userspace execution, and the idle loop.
> Note th
On 08/01/2014 05:55 AM, Paul E. McKenney wrote:
> From: "Paul E. McKenney"
>
> This commit adds a new RCU-tasks flavor of RCU, which provides
> call_rcu_tasks(). This RCU flavor's quiescent states are voluntary
> context switch (not preemption!), userspace execution, and the idle loop.
> Note th
On 08/01/2014 12:09 AM, Paul E. McKenney wrote:
>
>>> + /*
>>> +* There were callbacks, so we need to wait for an
>>> +* RCU-tasks grace period. Start off by scanning
>>> +* the task list for tasks that are not already
>>> +* voluntarily
On 07/30/2014 10:55 PM, Christoph Lameter wrote:
> On Wed, 30 Jul 2014, Fengguang Wu wrote:
>
>> FYI, this commit seems to convert some kernel boot hang bug into
>> different BUG messages.
>
> Hmmm. Still a bit confused as to why these messages occur.. Does this
> patch do any good?
The vmstat b
On 07/30/2014 09:56 PM, Fengguang Wu wrote:
> Hi Christoph,
>
> FYI, this commit seems to convert some kernel boot hang bug into
> different BUG messages.
>
> git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu.git
> for-3.17-consistent-ops
> commit 9b0c63851edaf54e909475fe2a0946f57810e98a
>
On 07/31/2014 08:39 AM, Paul E. McKenney wrote:
> From: "Paul E. McKenney"
>
> This commit adds a new RCU-tasks flavor of RCU, which provides
> call_rcu_tasks(). This RCU flavor's quiescent states are voluntary
> context switch (not preemption!), userspace execution, and the idle loop.
> Note th
smpboot.h doesn't need this declaration, remove it.
CC: Thomas Gleixner
Signed-off-by: Lai Jiangshan
---
include/linux/smpboot.h |2 --
1 files changed, 0 insertions(+), 2 deletions(-)
diff --git a/include/linux/smpboot.h b/include/linux/smpboot.h
index 13e9296..d37dc78 100644
rk. flush_work()
performs very quickly on an initialized but unused work item, so we don't
need the struct cpumask has_work for performance.
CC: a...@linux-foundation.org
CC: Chris Metcalf
CC: Mel Gorman
CC: Tejun Heo
CC: Christoph Lameter
CC: Frederic Weisbecker
Signed-off-by: Lai Jiangshan
-
doesn't need
get_online_cpus() which is removed in the patch.
CC: Thomas Gleixner
Cc: Rusty Russell
Cc: Peter Zijlstra
Cc: Srivatsa S. Bhat
CC: sta...@kernel.org
Signed-off-by: Lai Jiangshan
---
kernel/smpboot.c |4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/k
On 07/30/2014 10:45 PM, Christoph Lameter wrote:
> On Wed, 30 Jul 2014, Lai Jiangshan wrote:
>
>> I think the bug is here, it re-queues the per_cpu(vmstat_work, cpu) which is
>> offline
>> (after vmstat_cpuup_callback(CPU_DOWN_PREPARE). And cpu_stat_off is
>> a
On 07/29/2014 06:56 AM, Paul E. McKenney wrote:
> + /*
> + * Each pass through the following loop scans the list
> + * of holdout tasks, removing any that are no longer
> + * holdouts. When the list is empty, we are done.
> + */
> +
On 07/30/2014 11:23 AM, Tejun Heo wrote:
> Hello, Lai.
>
> On Wed, Jul 30, 2014 at 08:32:51AM +0800, Lai Jiangshan wrote:
>>> Why? Just sleep and retry? What's the point of requeueing?
>>
>> Accepted your comments except this one which may need to discuss
>
On 07/29/2014 11:39 PM, Christoph Lameter wrote:
> On Tue, 29 Jul 2014, Tejun Heo wrote:
>
>> Hmmm, well, then it's something else. Either a bug in workqueue or in
>> the caller. Given the track record, the latter is more likely.
>> e.g. it looks kinda suspicious that the work func is cleared af
On 07/11/2014 11:17 PM, Christoph Lameter wrote:
> On Fri, 11 Jul 2014, Frederic Weisbecker wrote:
>
>>> Converted what? We still need to keep a cpumask around that tells us which
>>> processor have vmstat running and which do not.
>>>
>>
>> Converted to cpumask_var_t.
>>
>> I mean we spent dozens
If I understand the semantics of cpu_stat_off correctly, please read on.
cpu_stat_off = the set of CPUs for which: the cpu is online && vmstat_work is off.
I think some code forgets to guarantee that each cpu in cpu_stat_off is online.
Thanks,
Lai
On 07/10/2014 10:04 PM, Christoph Lameter wrote:
> +
> +/*
On 07/29/2014 11:04 PM, Tejun Heo wrote:
> Hello,
>
> On Tue, Jul 29, 2014 at 05:16:07PM +0800, Lai Jiangshan wrote:
>
> First of all, the patch is too big. This is a rather pervasive
> change. Please split it up if at all possible.
>
>> +/* Start the mayday timer
creation activity (creater_work, mayday_timer) at first and then
stops idle workers and idle_timer.
(1/2 patch is the 1/3 patch of the v1, so it is not resent.)
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 238 ++--
1 files changed, 63
On 07/29/2014 02:55 AM, Tejun Heo wrote:
> Hello, Lai.
>
> On Sat, Jul 26, 2014 at 11:04:50AM +0800, Lai Jiangshan wrote:
>> There are some problems with the managers:
>> 1) The last idle worker prefer managing to processing.
>> It is better that the processing
If the worker task is not idle, it may be sleeping on some condition at the request
of the work. Our unfriendly wakeup in insert_kthread_work() may confuse
the worker.
Signed-off-by: Lai Jiangshan
---
kernel/kthread.c |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a
running state of the kthread_worker
and calls cancel_kthread_work() to cancel the possible requeued work.
Both cancel_kthread_work_sync() and cancel_kthread_work() share the
code of flush_kthread_work(), which also makes the implementation simpler.
Signed-off-by: Lai Jiangshan
---
include/linux
The wait_queue_head_t done has been totally unused since flush_kthread_work()
was re-implemented, so we remove it along with its initialization
code. Some LOCKDEP code also depends on this wait_queue_head, so that
LOCKDEP code is cleaned up as well.
Signed-off-by: Lai Jiangshan
---
include/linux
() is also removed along with it.
Signed-off-by: Lai Jiangshan
---
kernel/workqueue.c | 37 +
1 files changed, 13 insertions(+), 24 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index ce8e3fc..e1ab4f9 100644
--- a/kernel/workque
: Lai Jiangshan
---
kernel/workqueue.c |7 ++-
1 files changed, 6 insertions(+), 1 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 370f947..1d44d8d 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1708,8 +1708,13 @@ static struct worker *create_worker
orkqueues()
The struct kthread_worker kworker_creater is initialized earlier than
worker_pools in init_workqueues() so that kworker_creater_thread is
created than all early kworkers. Although the early kworkers are not
depends on kworker_creater_thread, but this initialization order makes
the p
manager is implemented inside the
worker; using a dedicated creater will make things more flexible.
So we offload worker management out of the kworker into a single
dedicated creater kthread. This is done in patch 2, while patch 1 is
preparation and patch 3 is a cleanup patch.
Lai Jiangshan (3