Commit-ID: e846d13958066828a9483d862cc8370a72fadbb6
Gitweb: https://git.kernel.org/tip/e846d13958066828a9483d862cc8370a72fadbb6
Author: Zhou Chengming <zhouchengmi...@huawei.com>
AuthorDate: Thu, 2 Nov 2017 09:18:21 +0800
Committer: Ingo Molnar <mi...@kernel.org>
CommitDate: Tue, 7 Nov 2017 12:20:09 +0100
kprobes, x86/alternatives: Use text_mutex to protect smp_alt_modules
Reviewed-by: Masami Hiramatsu <mhira...@kernel.org>
Acked-by: Steven Rostedt (VMware) <rost...@goodmis.org>
Signed-off-by: Zhou Chengming <zhouchengmi...@huawei.com>
---
arch/x86/kernel/alternative.c | 26 +-
kernel/extable.c | 2 ++
2 files changed, 15 insertions(+), 13 deletions(-)
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 3344d33..3ad9
Signed-off-by: Zhou Chengming <zhouchengmi...@huawei.com>
---
arch/x86/kernel/alternative.c | 24 +++-
kernel/extable.c | 2 ++
2 files changed, 13 insertions(+), 13 deletions(-)
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
k to solve this.
But there is a simpler way to handle this problem: we can reuse the
text_mutex to protect smp_alt_modules instead of introducing another
mutex. All the arch-dependent checks of kprobes already run inside the
text_mutex, so it's safe now.
Reviewed-by: Masami Hiramatsu
Signed-off-by: Zhou
The alternatives_smp_lock()/unlock() paths are only used on UP, so we
don't need to hold the text_mutex around that text_poke(). Then, in the
next patch, we can remove the outer smp_alt mutex too.
Signed-off-by: Zhou Chengming <zhouchengmi...@huawei.com>
---
arch/x86/kernel/alternative.c | 4
1 file changed, 4 deletions(-)
diff
s safe now.
Signed-off-by: Zhou Chengming <zhouchengmi...@huawei.com>
---
arch/x86/kernel/alternative.c | 13 ++---
1 file changed, 10 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 7eab6f6..b278cad 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/
a mutex to protect
the list; we only need to use preempt_disable(). We can be sure
smp_alt_modules becomes useless once SMP is enabled, so free it all
then. And alternatives_smp_module_del() can return directly when
!uniproc_patched, avoiding a list traversal.
Signed-off-by: Zhou Chengming <zhouchengmi...@huawei.com>
The previous two patches make sure smp_alt_modules is only used on UP,
so we don't need a mutex to protect the list; we only need
preempt_disable() while traversing it.
Signed-off-by: Zhou Chengming <zhouchengmi...@huawei.com>
---
arch/x86/kernel/alternative.c | 31 +++
1 file
can reuse the
text_mutex to protect smp_alt_modules instead of using another mutex.
And all the arch-dependent checks of kprobes are inside the text_mutex,
so it's safe now.
Signed-off-by: Zhou Chengming <zhouchengmi...@huawei.com>
---
arch/x86/kernel/alternative.c | 24 +++-
1 file changed, 11 inserti
When check_kprobe_address_safe() fails, probed_mod should be set to
NULL, because no module refcount is held. We initialize probed_mod to
NULL in register_kprobe() for the same reason.
Signed-off-by: Zhou Chengming <zhouchengmi...@huawei.com>
---
kernel/kprobes.c | 3 ++-
1 file changed, 2 insertions(+), 1
and
register the same kprobe. This patch puts the check inside the mutex.
Suggested-by: Masami Hiramatsu <mhira...@kernel.org>
Signed-off-by: Zhou Chengming <zhouchengmi...@huawei.com>
---
kernel/kprobes.c | 27 ---
1 file changed, 8 insertions(+), 19 deletions(-)
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index
. And alternatives_smp_module_del() can return directly
when !uniproc_patched to avoid a list traversal.
Signed-off-by: Zhou Chengming <zhouchengmi...@huawei.com>
---
arch/x86/kernel/alternative.c | 11 +--
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index
been
registered already, but check_kprobe_rereg() releases the
kprobe_mutex afterwards, so two paths may both pass the check and
register the same kprobe. This patch puts the check inside the mutex.
Signed-off-by: Zhou Chengming <zhouchengmi...@huawei.com>
---
kernel/kprobes.c | 28 +---
1 file
The old code used check_kprobe_rereg() to check whether the kprobe had
been registered already, but check_kprobe_rereg() releases the
kprobe_mutex afterwards, so two paths may both pass the check and
register the same kprobe. This patch puts the check inside the mutex.
Signed-off-by: Zhou Chengming
make sure task_A is
still on rq1, even though we hold rq1->lock. This patch re-picks the
first pushable task to be sure the task is still on the rq.
Signed-off-by: Zhou Chengming <zhouchengmi...@huawei.com>
---
kernel/sched/rt.c | 49 +++--
1 file changed, 23 inser
Commit-ID: 75e8387685f6c65feb195a4556110b58f852b848
Gitweb: http://git.kernel.org/tip/75e8387685f6c65feb195a4556110b58f852b848
Author: Zhou Chengming <zhouchengmi...@huawei.com>
AuthorDate: Fri, 25 Aug 2017 21:49:37 +0800
Committer: Ingo Molnar <mi...@kernel.org>
CommitDate: Tue, 29 Aug 2017 13:29:29 +0200
perf/ftrace: Fix double traces of perf on ftrace:function
Obviously, trace_events defined statically in trace.h won't use
__TRACE_LAST_TYPE, so let dynamic types use it. Also some minor
changes to trace_search_list() to make the code clearer.
Signed-off-by: Zhou Chengming <zhouchengmi...@huawei.com>
---
kernel/trace/trace_output.c | 12 ++--
1 file changed, 6
's not NULL.
Signed-off-by: Zhou Chengming <zhouchengmi...@huawei.com>
---
include/linux/perf_event.h      |  2 +-
include/linux/trace_events.h    |  4 ++--
kernel/events/core.c            | 13 +
kernel/trace/trace_event_perf.c |  4 +++-
kernel/trace/trace_kprobe.c     |  4 ++--
kernel/trace/trace_syscal
cial, it may contain _ddebugs of other
modules, whose modname differs from the name of the livepatch
module. So ddebug_remove_module() can't use mod->name to find the
right ddebug_table and remove it, which can cause a kernel crash when
we cat the file /dynamic_debug/control.
Signed-off-by:
The else branch is broken for task contexts: two events can be on the
same task context but on different CPUs. This patch fixes it: we don't
need to check move_group. We first make sure we're on the same task, or
that both are per-CPU events, and then make sure the events are for the
same CPU.
Signed-off-by: Zhou
changes it to use module_kallsyms_on_each_symbol() for module
symbols. After we apply this patch, the sys time is reduced dramatically:
~ time sudo insmod klp.ko
real    0m1.007s
user    0m0.032s
sys     0m0.924s
Signed-off-by: Zhou Chengming <zhouchengmi...@huawei.com>
---
kernel/livepatch/core.c | 5 -
1 file changed, 4
symbols, so it wastes
a lot of time. This patch changes it to use
module_kallsyms_on_each_symbol() for module symbols.
After we apply this patch, the sys time is reduced dramatically:
~ time sudo insmod klp.ko
real    0m1.007s
user    0m0.032s
sys     0m0.924s
Signed-off-by: Zhou Chengming
When we activate a policy on a request_queue, we create policy_data
for all the existing blkgs of the request_queue, so we should call
pd_init_fn() and pd_online_fn() on this newly created policy_data.
Signed-off-by: Zhou Chengming <zhouchengmi...@huawei.com>
---
block/blk-cgroup
From: z00354408
Signed-off-by: z00354408
---
block/blk-cgroup.c | 6 ++
1 file changed, 6 insertions(+)
diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index 8ba0af7..0dd9e76 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -1254,6 +1254,12 @@ int
Commit-ID: 3a09b8d45b3c05d49e581831de626927c37599f8
Gitweb: http://git.kernel.org/tip/3a09b8d45b3c05d49e581831de626927c37599f8
Author: Zhou Chengming <zhouchengmi...@huawei.com>
AuthorDate: Sun, 22 Jan 2017 15:22:35 +0800
Committer: Ingo Molnar <mi...@kernel.org>
CommitDate: Sun, 22 Jan 2017 10:34:17 +0100
sched/Documentation
A should be 5us, then the period and
runtime of group B should be 5us and 25000us.
Signed-off-by: Zhou Chengming <zhouchengmi...@huawei.com>
---
Documentation/scheduler/sched-rt-group.txt | 8
1 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/Documentation/scheduler/sched-rt-group.txt
b
Commit-ID: 4e71de7986386d5fd3765458f27d612931f27f5e
Gitweb: http://git.kernel.org/tip/4e71de7986386d5fd3765458f27d612931f27f5e
Author: Zhou Chengming <zhouchengmi...@huawei.com>
AuthorDate: Mon, 16 Jan 2017 11:21:11 +0800
Committer: Thomas Gleixner <t...@linutronix.de>
CommitDate: Tue, 17 Jan 2017 11:08:36 +0100
perf/x86/intel
tarted)
  spin_lock                          // not executed
intel_stop_scheduling()
  set state->sched_started = false
  if (!state->sched_started)
    spin_unlock                      // executed
Signed-off-by: NuoHan Qiao <qiaonuo...@huawei.com>
Signed-off-by: Zhou Chengming <zhouchengmi...@huawei.com>
---
arc
s-security/2016/11/04/13
Reported-by: CAI Qian <caiq...@redhat.com>
Tested-by: Yang Shukui <yangshu...@huawei.com>
Signed-off-by: Zhou Chengming <zhouchengmi...@huawei.com>
---
fs/proc/proc_sysctl.c | 3 ++-
1 files changed, 2 insertions(+), 1 deletions(-)
diff --git a/fs/proc/proc_sysctl.c b/fs/proc/proc_sysctl.c
index 5d931bf..c4c90bd 100644
--- a/fs/proc/pro
Fixes CVE-2016-9191.
Reported-by: CAI Qian <caiq...@redhat.com>
Tested-by: Yang Shukui <yangshu...@huawei.com>
Signed-off-by: Zhou Chengming <zhouchengmi...@huawei.com>
---
fs/proc/proc_sysctl.c | 3 ++-
1 files changed, 2 insertions(+), 1 deletions(-)
diff --git a/fs/proc/proc_sysctl.c b/fs/proc/proc_sysctl.c
index 5d931bf..c4c90bd 100644
--- a/fs/proc
Allow the wakeup_dl tracer to be used by instances, like the wakeup
tracer and the wakeup_rt tracer.
Signed-off-by: Zhou Chengming <zhouchengmi...@huawei.com>
---
kernel/trace/trace_sched_wakeup.c | 1 +
1 files changed, 1 insertions(+), 0 deletions(-)
diff --git a/kernel/trace/trace_sched_wakeup.c
b/kernel/trace
In the !global_reclaim(sc) case, we should update sc->nr_reclaimed after
each shrink_slab() in the loop, because we need the correct
sc->nr_reclaimed value to decide whether we can break out.
Signed-off-by: Zhou Chengming <zhouchengmi...@huawei.com>
---
mm/vmscan.c | 5 +
1 files changed, 5 insertions(+), 0 deletions(-)
When CONFIG_SPARSEMEM_EXTREME is disabled, __section_nr() can get
the section number with a subtraction directly.
Signed-off-by: Zhou Chengming <zhouchengmi...@huawei.com>
---
mm/sparse.c | 12 +++-
1 files changed, 7 insertions(+), 5 deletions(-)
diff --git a/mm/sparse.c b/mm/sparse.c
index 5d0cf45..36d7bbb
>nr_running to calculate the whole __sched_period value.
Signed-off-by: Zhou Chengming <zhouchengmi...@huawei.com>
---
kernel/sched/fair.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 0fe30e6..59c9378 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair
.count to become -1.
From the suggestion of Andrea Arcangeli, unmerge_and_remove_all_rmap_items
has the same SMP race condition, so fix it too. My previous fix in
scan_get_next_rmap_item would introduce a different SMP race condition,
so just invert the up_read/spin_unlock order as Andrea Arcange
nt to become -1.
I changed the scan_get_next_rmap_item function, referring to the
khugepaged scan function.
Signed-off-by: Zhou Chengming <zhouchengmi...@huawei.com>
---
mm/ksm.c | 7 ++-
1 files changed, 2 insertions(+), 5 deletions(-)
diff --git a/mm/ksm.c b/mm/ksm.c
index 7ee101e..6e4324d 100644
--- a/mm/ksm.c
+++ b/m
When KASLR is enabled, livepatch adjusts the old_addr of each changed
function accordingly. So do the same thing for relocs.
[PATCH v1] https://lkml.org/lkml/2015/11/4/91
Reported-by: Cyril B. <c...@alwaysdata.com>
Signed-off-by: Zhou Chengming <zhouchengmi...@huawei.com>
---
kernel/livepatch/core.c | 6 ++
1 files changed, 6 insertions(+), 0
When KASLR is enabled, func->old_addr is set to zero
and livepatch finds the right old address itself.
But for relocs, livepatch just verifies them using reloc->val
(the old address supplied from userspace), so verification fails and a
"kernel mismatch" error is reported.
Reported-by: Cyril B. <c...@alwaysdata.com>
Signed-off-by