[tip:x86/asm] kprobes, x86/alternatives: Use text_mutex to protect smp_alt_modules

2017-11-07 Thread tip-bot for Zhou Chengming
Commit-ID: e846d13958066828a9483d862cc8370a72fadbb6 Gitweb: https://git.kernel.org/tip/e846d13958066828a9483d862cc8370a72fadbb6 Author: Zhou Chengming AuthorDate: Thu, 2 Nov 2017 09:18:21 +0800 Committer: Ingo Molnar CommitDate: Tue, 7 Nov 2017 12:20:09 +0100 kprobes, x86/alternatives

[PATCH v4] kprobes, x86/alternatives: use text_mutex to protect smp_alt_modules

2017-11-01 Thread Zhou Chengming
Hiramatsu Acked-by: Steven Rostedt (VMware) Signed-off-by: Zhou Chengming --- arch/x86/kernel/alternative.c | 26 +- kernel/extable.c | 2 ++ 2 files changed, 15 insertions(+), 13 deletions(-) diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alt

[PATCH v3] kprobes, x86/alternatives: use text_mutex to protect smp_alt_modules

2017-11-01 Thread Zhou Chengming
Hiramatsu Signed-off-by: Zhou Chengming --- arch/x86/kernel/alternative.c | 26 +- kernel/extable.c | 2 ++ 2 files changed, 15 insertions(+), 13 deletions(-) diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c index 3344d33..3ad9

[PATCH v2] kprobes, x86/alternatives: use text_mutex to protect smp_alt_modules

2017-10-31 Thread Zhou Chengming
k to solve this. But there is a simpler way to handle this problem. We can reuse the text_mutex to protect smp_alt_modules instead of using another mutex. And all the arch dependent checks of kprobes are inside the text_mutex, so it's safe now. Reviewed-by: Masami Hiramatsu Signed-off-by: Zhou
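
A minimal sketch of the locking pattern described above, not the actual diff: the smp_alt_modules add path takes the existing text_mutex instead of a dedicated smp_alt mutex, so any kprobes code that already holds text_mutex is serialized against it. Names suffixed with _sketch are illustrative.

#include <linux/list.h>
#include <linux/memory.h>	/* text_mutex */
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/slab.h>

struct smp_alt_module_sketch {
	struct module *mod;
	struct list_head next;
};

static LIST_HEAD(smp_alt_modules_sketch);

static void smp_alt_module_add_sketch(struct module *mod)
{
	struct smp_alt_module_sketch *smp;

	smp = kzalloc(sizeof(*smp), GFP_KERNEL);
	if (!smp)
		return;
	smp->mod = mod;

	mutex_lock(&text_mutex);	/* was: mutex_lock(&smp_alt) */
	list_add_tail(&smp->next, &smp_alt_modules_sketch);
	mutex_unlock(&text_mutex);
}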

[PATCH 2/4] x86/alternatives: Don't need text_mutex when text_poke() on UP

2017-10-28 Thread Zhou Chengming
The alternatives_smp_lock/unlock are only used on UP, so we don't need to hold the text_mutex when calling text_poke(). Then in the next patch, we can remove the outer smp_alt mutex too. Signed-off-by: Zhou Chengming --- arch/x86/kernel/alternative.c | 4 1 file changed, 4 deletions(-) diff

[PATCH 4/4] kprobes, x86/alternatives: preempt_disable() when check smp_alt_modules

2017-10-28 Thread Zhou Chengming
s safe now. Signed-off-by: Zhou Chengming --- arch/x86/kernel/alternative.c | 13 ++--- 1 file changed, 10 insertions(+), 3 deletions(-) diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c index 7eab6f6..b278cad 100644 --- a/arch/x86/kernel/alternative.c +++ b/arch/

[PATCH 1/4] x86/alternatives: free smp_alt_modules when enable smp

2017-10-28 Thread Zhou Chengming
a mutex to protect the list; we only need to use preempt_disable(). We can make sure smp_alt_modules will be useless after enabling SMP, so free it all. And alternatives_smp_module_del() can return directly when !uniproc_patched to avoid a list traversal. Signed-off-by: Zhou Chengming --- arch/x86/kernel

[PATCH 3/4] x86/alternatives: get rid of the smp_alt mutex

2017-10-28 Thread Zhou Chengming
The previous two patches make sure smp_alt_modules will only be used on UP, so we don't need a mutex to protect the list; we only need preempt_disable() when traversing the list. Signed-off-by: Zhou Chengming --- arch/x86/kernel/alternative.c | 31 +++ 1 file
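
An illustrative sketch of the traversal the snippet describes (reusing the _sketch list from the earlier example; not the actual patch): because the list is only touched while the system is still uniprocessor, disabling preemption around the walk is enough, and the old smp_alt mutex can go away.

static bool smp_alt_contains_sketch(unsigned long addr)
{
	struct smp_alt_module_sketch *smp;
	bool found = false;

	preempt_disable();	/* was: mutex_lock(&smp_alt) */
	list_for_each_entry(smp, &smp_alt_modules_sketch, next) {
		if (smp->mod && within_module(addr, smp->mod)) {
			found = true;
			break;
		}
	}
	preempt_enable();
	return found;
}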

[PATCH] kprobes, x86/alternatives: use text_mutex to protect smp_alt_modules

2017-10-27 Thread Zhou Chengming
can reuse the text_mutex to protect smp_alt_modules instead of using another mutex. And all the arch dependent checks of kprobes are inside the text_mutex, so it's safe now. Signed-off-by: Zhou Chengming --- arch/x86/kernel/alternative.c | 24 +++- 1 file changed, 11 inserti

[PATCH v3 2/2] kprobes: initialize probed_mod to NULL

2017-10-27 Thread Zhou Chengming
When check_kprobe_address_safe() returns failure, probed_mod should be set to NULL, because no module refcount is held. And we initialize probed_mod to NULL in register_kprobe() for the same reason. Signed-off-by: Zhou Chengming --- kernel/kprobes.c | 3 ++- 1 file changed, 2 insertions(+), 1
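
A simplified sketch of the initialization rule being described (hypothetical helper names, not the real register_kprobe()): probed_mod starts out NULL and stays NULL on the failing check, so cleanup code never drops a module reference that was never taken.

/* check_address_safe_sketch() and arm_probe_sketch() are hypothetical. */
static int register_probe_sketch(void *addr)
{
	struct module *probed_mod = NULL;	/* no reference held yet */
	int ret;

	ret = check_address_safe_sketch(addr, &probed_mod);
	if (ret)
		return ret;	/* probed_mod is still NULL: nothing to put */

	ret = arm_probe_sketch(addr);
	if (probed_mod)
		module_put(probed_mod);	/* only drop a reference we took */
	return ret;
}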

[PATCH v3 1/2] kprobes: avoid the kprobe being re-registered

2017-10-27 Thread Zhou Chengming
and register the same kprobe. This patch puts the check inside the mutex. Suggested-by: Masami Hiramatsu Signed-off-by: Zhou Chengming --- kernel/kprobes.c | 27 --- 1 file changed, 8 insertions(+), 19 deletions(-) diff --git a/kernel/kprobes.c b/kernel/kprobes.c index

[PATCH] x86/alternatives: free smp_alt_modules when enable smp

2017-10-27 Thread Zhou Chengming
. And alternatives_smp_module_del() can return directly when !uniproc_patched to avoid a list traversal. Signed-off-by: Zhou Chengming --- arch/x86/kernel/alternative.c | 11 +-- 1 file changed, 9 insertions(+), 2 deletions(-) diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c index

[PATCH v2] kprobes: avoid the kprobe being re-registered

2017-10-26 Thread Zhou Chengming
been registered already, but check_kprobe_rereg() will release the kprobe_mutex then, so two paths may both pass the check and register the same kprobe. This patch puts the check inside the mutex. Signed-off-by: Zhou Chengming --- kernel/kprobes.c | 28 +--- 1 file

[PATCH] kprobes: avoid the kprobe being re-registered

2017-10-26 Thread Zhou Chengming
The old code uses check_kprobe_rereg() to check if the kprobe has been registered already, but check_kprobe_rereg() releases the kprobe_mutex again afterwards, so two paths may both pass the check and register the same kprobe. This patch puts the check inside the mutex. Signed-off-by: Zhou Chengming
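
A sketch of the race and the fix with made-up names (not the kprobes code itself): if the "already registered?" check drops the mutex before the insertion, two racing callers can both pass it; doing the check under the same lock as the insertion closes the window.

struct probe_sketch {
	struct list_head list;
	void *addr;
};

static DEFINE_MUTEX(probe_mutex_sketch);
static LIST_HEAD(probe_list_sketch);

static int register_sketch(struct probe_sketch *p)
{
	struct probe_sketch *cur;
	int ret = 0;

	mutex_lock(&probe_mutex_sketch);
	list_for_each_entry(cur, &probe_list_sketch, list) {
		if (cur->addr == p->addr) {	/* check... */
			ret = -EINVAL;
			goto out;
		}
	}
	list_add(&p->list, &probe_list_sketch);	/* ...and insert under one lock */
out:
	mutex_unlock(&probe_mutex_sketch);
	return ret;
}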

[PATCH] sched/rt.c: pick and check task if double_lock_balance() unlock the rq

2017-09-11 Thread Zhou Chengming
make sure that task_A is still on rq1, even though we hold rq1->lock. This patch re-picks the first pushable task to make sure the task is still on the rq. Signed-off-by: Zhou Chengming --- kernel/sched/rt.c | 49 +++-- 1 file changed, 23 inser

[tip:perf/urgent] perf/ftrace: Fix double traces of perf on ftrace:function

2017-08-29 Thread tip-bot for Zhou Chengming
Commit-ID: 75e8387685f6c65feb195a4556110b58f852b848 Gitweb: http://git.kernel.org/tip/75e8387685f6c65feb195a4556110b58f852b848 Author: Zhou Chengming AuthorDate: Fri, 25 Aug 2017 21:49:37 +0800 Committer: Ingo Molnar CommitDate: Tue, 29 Aug 2017 13:29:29 +0200 perf/ftrace: Fix double

[PATCH] tracing: make dynamic types can use __TRACE_LAST_TYPE

2017-08-27 Thread Zhou Chengming
Obviously, trace_events that are defined statically in trace.h won't use __TRACE_LAST_TYPE, so let dynamic types use it. Also some minor changes to trace_search_list() to make the code clearer. Signed-off-by: Zhou Chengming --- kernel/trace/trace_output.c | 12 ++-- 1 file changed, 6

[PATCH] perf/ftrace: fix doubled traces of perf on ftrace:function

2017-08-25 Thread Zhou Chengming
's not NULL. Signed-off-by: Zhou Chengming --- include/linux/perf_event.h | 2 +- include/linux/trace_events.h| 4 ++-- kernel/events/core.c| 13 + kernel/trace/trace_event_perf.c | 4 +++- kernel/trace/trace_kprobe.c | 4 ++-- kernel/trace/trace_syscal

[PATCH] module: fix ddebug_remove_module()

2017-07-06 Thread Zhou Chengming
cial, it may contain _ddebugs of other modules, whose modname is different from the name of the livepatch module. So ddebug_remove_module() can't use mod->name to find the right ddebug_table and remove it. This can cause a kernel crash when we cat the file /dynamic_debug/control. Signed-off-by:

[PATCH] perf/core: make sure group events are for the same cpu

2017-06-17 Thread Zhou Chengming
The else branch is broken for taskctx: two events can be on the same taskctx but on different CPUs. This patch fixes it; we don't need to check move_group. We first make sure we're on the same task, or that both are per-cpu events, and then make sure the events are for the same CPU. Signed-off-by: Zhou

[PATCH v2] livepatch: Reduce the time of finding module symbols

2017-03-28 Thread Zhou Chengming
changes it to use module_kallsyms_on_each_symbol() for module symbols. After we apply this patch, the sys time is reduced dramatically. ~ time sudo insmod klp.ko real 0m1.007s user 0m0.032s sys 0m0.924s Signed-off-by: Zhou Chengming --- kernel/livepatch/core.c | 5 - 1 file changed, 4
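
A hedged sketch of how that iterator is typically driven (assuming the 4.x-era callback prototype; the argument struct and callback below are illustrative, not the livepatch code): the callback runs once per module symbol and returns non-zero to stop the walk early.

/* Assumed 4.x-era prototype:
 *   int module_kallsyms_on_each_symbol(int (*fn)(void *, const char *,
 *                                      struct module *, unsigned long),
 *                                      void *data);
 */
struct find_arg_sketch {
	const char *objname;	/* module we care about */
	const char *name;	/* symbol we are looking for */
	unsigned long addr;	/* result, 0 if not found */
};

static int find_callback_sketch(void *data, const char *sym,
				struct module *mod, unsigned long addr)
{
	struct find_arg_sketch *arg = data;

	if (mod && !strcmp(mod->name, arg->objname) && !strcmp(sym, arg->name)) {
		arg->addr = addr;
		return 1;	/* non-zero stops the iteration early */
	}
	return 0;
}

/* usage: module_kallsyms_on_each_symbol(find_callback_sketch, &arg); */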

[PATCH] reduce the time of finding symbols for module

2017-03-27 Thread Zhou Chengming
symbols, so it wastes a lot of time. This patch changes it to use module_kallsyms_on_each_symbol() for module symbols. After we apply this patch, the sys time is reduced dramatically. ~ time sudo insmod klp.ko real 0m1.007s user 0m0.032s sys 0m0.924s Signed-off-by: Zhou Chengming

[PATCH v3] don't forget to call pd_online_fn when activate policy

2017-03-10 Thread Zhou Chengming
When we activate a policy on the request_queue, we create policy_data for all the existing blkgs of the request_queue, so we should call pd_init_fn() and pd_online_fn() on these newly created policy_data. Signed-off-by: Zhou Chengming --- block/blk-cgroup.c | 6 ++ 1 file changed, 6

[PATCH v2] don't forget to call pd_online_fn when activate policy

2017-03-08 Thread Zhou Chengming
When we activate a policy on the request_queue, we create policy_data for all the existing blkgs of the request_queue, so we should call pd_init_fn() and pd_online_fn() on these newly created policy_data. Signed-off-by: Zhou Chengming --- block/blk-cgroup.c | 6 ++ 1 file changed, 6

[PATCH] don't forget to call pd_online_fn when activate policy

2017-03-07 Thread Zhou Chengming
From: z00354408 Signed-off-by: z00354408 --- block/blk-cgroup.c | 6 ++ 1 file changed, 6 insertions(+) diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c index 8ba0af7..0dd9e76 100644 --- a/block/blk-cgroup.c +++ b/block/blk-cgroup.c @@ -1254,6 +1254,12 @@ int

[tip:sched/core] sched/Documentation/sched-rt-group: Fix incorrect example

2017-01-22 Thread tip-bot for Zhou Chengming
Commit-ID: 3a09b8d45b3c05d49e581831de626927c37599f8 Gitweb: http://git.kernel.org/tip/3a09b8d45b3c05d49e581831de626927c37599f8 Author: Zhou Chengming AuthorDate: Sun, 22 Jan 2017 15:22:35 +0800 Committer: Ingo Molnar CommitDate: Sun, 22 Jan 2017 10:34:17 +0100 sched/Documentation

[PATCH] sched: Documentation: sched-rt-group: fix example error

2017-01-21 Thread Zhou Chengming
A should be 5us, then the period and runtime of group B should be 5us and 25000us. Signed-off-by: Zhou Chengming --- Documentation/scheduler/sched-rt-group.txt |8 1 files changed, 4 insertions(+), 4 deletions(-) diff --git a/Documentation/scheduler/sched-rt-group.txt b

[tip:perf/urgent] perf/x86/intel: Handle exclusive threadid correctly on CPU hotplug

2017-01-17 Thread tip-bot for Zhou Chengming
Commit-ID: 4e71de7986386d5fd3765458f27d612931f27f5e Gitweb: http://git.kernel.org/tip/4e71de7986386d5fd3765458f27d612931f27f5e Author: Zhou Chengming AuthorDate: Mon, 16 Jan 2017 11:21:11 +0800 Committer: Thomas Gleixner CommitDate: Tue, 17 Jan 2017 11:08:36 +0100 perf/x86/intel

[PATCH] fix race caused by hyperthreads when online an offline cpu

2017-01-15 Thread Zhou Chengming
tarted) spin_lock // not executed intel_stop_scheduling() set state->sched_started = false if (!state->sched_started) spin_unlock // executed Signed-off-by: NuoHan Qiao Signed-off-by: Zhou Chengming -

[PATCH] fix race caused by hyperthreads when online an offline cpu

2017-01-14 Thread Zhou Chengming
tarted) spin_lock // not executed intel_stop_scheduling() set state->sched_started = false if (!state->sched_started) spin_unlock // executed Signed-off-by: NuoHan Qiao Signed-off-by: Zhou Chengming -

[PATCH] fix race caused by hyperthreads when online an offline cpu

2017-01-12 Thread Zhou Chengming
tarted) spin_lock // not executed intel_stop_scheduling() set state->sched_started = false if (!state->sched_started) spin_unlock // executed Signed-off-by: NuoHan Qiao Signed-off-by: Zhou Chengming --- arc

[PATCH v2] Drop reference added by grab_header

2017-01-05 Thread Zhou Chengming
s-security/2016/11/04/13 Reported-by: CAI Qian Tested-by: Yang Shukui Signed-off-by: Zhou Chengming --- fs/proc/proc_sysctl.c |3 ++- 1 files changed, 2 insertions(+), 1 deletions(-) diff --git a/fs/proc/proc_sysctl.c b/fs/proc/proc_sysctl.c index 5d931bf..c4c90bd 100644 --- a/fs/proc/pro

[PATCH] Drop reference added by grab_header

2017-01-05 Thread Zhou Chengming
Fixes CVE-2016-9191. Reported-by: CAI Qian Tested-by: Yang Shukui Signed-off-by: Zhou Chengming --- fs/proc/proc_sysctl.c |3 ++- 1 files changed, 2 insertions(+), 1 deletions(-) diff --git a/fs/proc/proc_sysctl.c b/fs/proc/proc_sysctl.c index 5d931bf..c4c90bd 100644 --- a/fs/proc

[PATCH] tracing: Allow wakeup_dl tracer to be used by instances

2016-11-13 Thread Zhou Chengming
Allow wakeup_dl tracer to be used by instances, like wakeup tracer and wakeup_rt tracer. Signed-off-by: Zhou Chengming --- kernel/trace/trace_sched_wakeup.c |1 + 1 files changed, 1 insertions(+), 0 deletions(-) diff --git a/kernel/trace/trace_sched_wakeup.c b/kernel/trace

[PATCH] update sc->nr_reclaimed after each shrink_slab

2016-07-21 Thread Zhou Chengming
In the !global_reclaim(sc) case, we should update sc->nr_reclaimed after each shrink_slab call in the loop, because we need the correct sc->nr_reclaimed value to see if we can break out. Signed-off-by: Zhou Chengming --- mm/vmscan.c |5 + 1 files changed, 5 insertions(+), 0 deletions(-)
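
A simplified sketch of the accounting argument above (hypothetical wrapper names, not the real shrink_zone() loop): whatever each slab pass frees must be folded into sc->nr_reclaimed before the break-out test, otherwise the test looks at a stale value.

/* shrink_slab_sketch() and should_continue_reclaim_sketch() are hypothetical. */
static void reclaim_loop_sketch(struct scan_control *sc)
{
	do {
		unsigned long freed;

		freed = shrink_slab_sketch(sc);		/* one slab pass */
		sc->nr_reclaimed += freed;		/* update before the check */

		if (sc->nr_reclaimed >= sc->nr_to_reclaim)
			break;				/* decision uses fresh data */
	} while (should_continue_reclaim_sketch(sc));
}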

[PATCH] make __section_nr more efficient

2016-07-19 Thread Zhou Chengming
When CONFIG_SPARSEMEM_EXTREME is disabled, __section_nr can get the section number with a subtraction directly. Signed-off-by: Zhou Chengming --- mm/sparse.c | 12 +++- 1 files changed, 7 insertions(+), 5 deletions(-) diff --git a/mm/sparse.c b/mm/sparse.c index 5d0cf45..36d7bbb
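
A sketch of the arithmetic being described (hedged; the real __section_nr() also has to handle CONFIG_SPARSEMEM_EXTREME): with the flat mem_section[][] layout, the section number is just the pointer offset of the entry from the start of the array, so no search loop is needed.

/* !CONFIG_SPARSEMEM_EXTREME fast path, illustrative only:
 * mem_section is a flat two-dimensional array, so the section
 * number falls out of plain pointer arithmetic.
 */
static unsigned long section_nr_sketch(struct mem_section *ms)
{
	return ms - &mem_section[0][0];
}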

[PATCH] sched: fix the calculation of __sched_period in sched_slice()

2016-05-08 Thread Zhou Chengming
>nr_running to calculate the whole __sched_period value. Signed-off-by: Zhou Chengming --- kernel/sched/fair.c |2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index 0fe30e6..59c9378 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair

[PATCH v4] ksm: fix conflict between mmput and scan_get_next_rmap_item

2016-05-08 Thread Zhou Chengming
.count to become -1. From the suggestion of Andrea Arcangeli, unmerge_and_remove_all_rmap_items has the same SMP race condition, so fix it too. My prev fix in function scan_get_next_rmap_item will introduce a different SMP race condition, so just invert the up_read/spin_unlock order as Andrea Arcange

[PATCH v3] ksm: fix conflict between mmput and scan_get_next_rmap_item

2016-05-08 Thread Zhou Chengming
.count to become -1. From the suggestion of Andrea Arcangeli, unmerge_and_remove_all_rmap_items has the same SMP race condition, so fix it too. My prev fix in function scan_get_next_rmap_item will introduce a different SMP race condition, so just invert the up_read/spin_unlock order as Andrea Arcange

[PATCH v2] ksm: fix conflict between mmput and scan_get_next_rmap_item

2016-05-05 Thread Zhou Chengming
.count to become -1. From the suggestion of Andrea Arcangeli, unmerge_and_remove_all_rmap_items has the same SMP race condition, so fix it too. My prev fix in function scan_get_next_rmap_item will introduce a different SMP race condition, so just invert the up_read/spin_unlock order as Andrea Arcange

[PATCH] ksm: fix conflict between mmput and scan_get_next_rmap_item

2016-05-05 Thread Zhou Chengming
nt to become -1. I changed the scan_get_next_rmap_item function, referring to the khugepaged scan function. Signed-off-by: Zhou Chengming --- mm/ksm.c |7 ++- 1 files changed, 2 insertions(+), 5 deletions(-) diff --git a/mm/ksm.c b/mm/ksm.c index 7ee101e..6e4324d 100644 --- a/mm/ksm.c +++ b/m

[PATCH v2] livepatch: x86: bugfix about kASLR

2015-11-05 Thread Zhou Chengming
When KASLR is enabled, livepatch will adjust the old_addr of changed functions accordingly. So do the same thing for reloc. [PATCH v1] https://lkml.org/lkml/2015/11/4/91 Reported-by: Cyril B. Signed-off-by: Zhou Chengming --- kernel/livepatch/core.c |6 ++ 1 files changed, 6 insertions(+), 0

[PATCH] livepatch: x86: bugfix about kASLR

2015-11-04 Thread Zhou Chengming
When KASLR is enabled, func->old_addr will be set to zero and livepatch will find the right old address. But for reloc, livepatch just verifies it using reloc->val (the old addr from userspace), so verification fails and reports a "kernel mismatch" error. Reported-by: Cyril B. Signed-off-by
