The patch 125e564 (Move Kconfig.instrumentation to arch/Kconfig and
init/Kconfig) removed the Instrumentation Support menu,
and the configuration options under it were moved to General setup.
Update Documentation/kprobes.txt to reflect this change.
Signed-off-by: Li Bin huawei.li...@huawei.com
help this issue? I have no evidence that its lack is
responsible for the issue, but I think it is needed here. Is that right?
SPIN_BUG_ON(ACCESS_ONCE(lock->owner) == current, "recursion");
Thanks,
Li Bin
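The check being discussed can be illustrated with a toy model. This is not the kernel's code: struct dbg_spinlock and spin_recursion are invented names, and the volatile cast merely mimics what ACCESS_ONCE() does in the kernel (force a single read of the owner field).

```c
#include <assert.h>

/* Toy model of the debug spinlock check discussed above; the names are
 * illustrative, not the kernel's. */
struct dbg_spinlock {
    const void *owner;       /* task currently holding the lock, or NULL */
};

/* Returns nonzero if 'task' re-acquiring 'lock' would be recursion.
 * The volatile cast mimics ACCESS_ONCE(): force a single read of
 * lock->owner rather than letting the compiler re-read it. */
static int spin_recursion(struct dbg_spinlock *lock, const void *task)
{
    return *(const void *volatile *)&lock->owner == task;
}
```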
Causing that, _HOWEVER_ look at .owner_cpu and the reporting cpu!! How
can
On 2014/12/26 15:01, Sasha Levin wrote:
On 12/26/2014 01:45 AM, Li Bin wrote:
On 2014/7/8 4:05, Peter Zijlstra wrote:
On Mon, Jul 07, 2014 at 09:55:43AM -0400, Sasha Levin wrote:
I've also had this one, which looks similar:
[10375.005884] BUG: spinlock recursion on CPU#0, modprobe/10965
The implementation of the execution flow redirection in the livepatch
ftrace handler depends on the specific architecture. This patch
introduces a klp_arch_set_pc (like kgdb_arch_set_pc) interface to change
the pt_regs.
Signed-off-by: Li Bin huawei.li...@huawei.com
---
arch/x86/include/asm
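As a rough illustration of the interface described above, here is a minimal sketch. It is an assumption-laden model, not the patch itself: the real pt_regs is architecture-specific and much larger, and the field name ip matches x86 (other architectures use pc or similar).

```c
#include <assert.h>

/* Minimal stand-in for the x86 pt_regs; the real structure holds the
 * full saved register state. */
struct pt_regs {
    unsigned long ip;        /* saved instruction pointer */
};

/* Sketch of the per-arch hook: redirect execution out of the ftrace
 * handler by rewriting the saved instruction pointer. */
static inline void klp_arch_set_pc(struct pt_regs *regs, unsigned long pc)
{
    regs->ip = pc;
}
```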
%s/ARCH_SUPPORT_FTARCE_OPS/ARCH_SUPPORTS_FTRACE_OPS/
Signed-off-by: Li Bin huawei.li...@huawei.com
---
kernel/trace/ftrace.c |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 929a733..9473b24 100644
--- a/kernel/trace
Sorry! Bad format, please ignore this patch.
On 2014/12/19 13:37, Li Bin wrote:
The implementation of the execution flow redirection in the livepatch
ftrace handler depends on the specific architecture. This patch
introduces a klp_arch_set_pc (like kgdb_arch_set_pc) interface to change
On 2015/1/21 17:07, Li Bin wrote:
This reverts commit 83a90bb1345767f0cb96d242fd8b9db44b2b0e17.
The approach of allowing only the topmost patch on the stack to be
enabled or disabled is unreasonable, as in the following case:
- do live patch1
- disable patch1
- do
On 2015/1/21 22:08, Jiri Kosina wrote:
On Wed, 21 Jan 2015, Li Bin wrote:
for disable_patch:
A patch may not be disabled if a later patch has
dependencies with it and has been enabled.
for enable_patch:
A patch may not be enabled if an earlier patch has
dependencies
On 2015/1/21 22:36, Seth Jennings wrote:
On Wed, Jan 21, 2015 at 03:06:38PM +0100, Jiri Kosina wrote:
On Wed, 21 Jan 2015, Li Bin wrote:
This reverts commit 83a90bb1345767f0cb96d242fd8b9db44b2b0e17.
The approach of allowing only the topmost patch on the stack to be
enabled or disabled
On 2015/1/22 11:51, Josh Poimboeuf wrote:
On Thu, Jan 22, 2015 at 08:42:29AM +0800, Li Bin wrote:
On 2015/1/21 22:08, Jiri Kosina wrote:
On Wed, 21 Jan 2015, Li Bin wrote:
By this you limit the definition of the patch inter-dependency to just
symbols. But that's not the only way how patches
On 2015/1/22 17:15, Miroslav Benes wrote:
On Thu, 22 Jan 2015, Li Bin wrote:
On 2015/1/21 17:07, Li Bin wrote:
This reverts commit 83a90bb1345767f0cb96d242fd8b9db44b2b0e17.
The approach of allowing only the topmost patch on the stack to be
enabled or disabled is unreasonable
On 2015/1/22 16:39, Li Bin wrote:
On 2015/1/22 11:51, Josh Poimboeuf wrote:
On Thu, Jan 22, 2015 at 08:42:29AM +0800, Li Bin wrote:
On 2015/1/21 22:08, Jiri Kosina wrote:
On Wed, 21 Jan 2015, Li Bin wrote:
By this you limit the definition of the patch inter-dependency to just
symbols
On 2015/1/22 21:05, Josh Poimboeuf wrote:
On Thu, Jan 22, 2015 at 05:54:23PM +0800, Li Bin wrote:
On 2015/1/22 16:39, Li Bin wrote:
On 2015/1/22 11:51, Josh Poimboeuf wrote:
On Thu, Jan 22, 2015 at 08:42:29AM +0800, Li Bin wrote:
On 2015/1/21 22:08, Jiri Kosina wrote:
On Wed, 21 Jan 2015, Li
for disable_patch:
A patch may not be disabled if a later patch has
dependencies with it and has been enabled.
for enable_patch:
A patch may not be enabled if an earlier patch has
dependencies with it and has been disabled.
Signed-off-by: Li Bin huawei.li...@huawei.com
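The two rules above can be modeled in a few lines of C. This is a toy model with invented names (struct patch_stack, can_disable, can_enable), not the proposed kernel code; it only captures the stated semantics: patches sit on a stack, and dependencies only point from later patches to earlier ones.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define MAX_PATCHES 8

/* Toy model: deps[i][j] means patch i depends on earlier patch j. */
struct patch_stack {
    bool enabled[MAX_PATCHES];
    bool deps[MAX_PATCHES][MAX_PATCHES];
    size_t nr;
};

/* Rule for disable: refuse if any later, enabled patch depends on p. */
static bool can_disable(const struct patch_stack *s, size_t p)
{
    for (size_t i = p + 1; i < s->nr; i++)
        if (s->enabled[i] && s->deps[i][p])
            return false;
    return true;
}

/* Rule for enable: refuse if any earlier patch that p depends on is
 * currently disabled. */
static bool can_enable(const struct patch_stack *s, size_t p)
{
    for (size_t j = 0; j < p; j++)
        if (s->deps[p][j] && !s->enabled[j])
            return false;
    return true;
}
```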
to be enabled if an earlier patch has
dependencies with it and has been disabled.
Li Bin (2):
livepatch: Revert livepatch: enforce patch stacking semantics
livepatch: disable/enable_patch manners for interdependent patches
kernel/livepatch/core.c | 66
be able to apply a new live patch unless disabling
patch1, although there are no dependencies.
Signed-off-by: Li Bin huawei.li...@huawei.com
---
kernel/livepatch/core.c | 10 --
1 files changed, 0 insertions(+), 10 deletions(-)
diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
it, but by adding a helper function to
ftrace, we will be able to support livepatch on architectures which don't
support this option.
This is not correct for the case where the prologues of the old and new
functions differ.
Thanks,
Li Bin
I submit this patchset as RFC since I'm not quite sure that I'm
On 2015/6/17 21:20, Miroslav Benes wrote:
On Wed, 17 Jun 2015, Li Bin wrote:
On 2015/6/17 16:13, Miroslav Benes wrote:
On Wed, 17 Jun 2015, Li Bin wrote:
The list of applied patches can be obtained just by 'ls
/sys/kernel/livepatch' and their state is in the enabled attribute in each
State
---
1 klp_test1 enabled
2 klp_test2 enabled
3 klp_test3 disabled
---
Signed-off-by: Li Bin huawei.li
On 2015/6/17 16:13, Miroslav Benes wrote:
On Wed, 17 Jun 2015, Li Bin wrote:
The added sysfs interface /sys/kernel/livepatch/state is read-only;
it shows the patches that have been applied, including the stack index
and the state of each patch.
$ cat /sys/kernel/livepatch/state
Index
On 2015/5/29 15:14, Paul Bolle wrote:
On Thu, 2015-05-28 at 13:51 +0800, Li Bin wrote:
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
select HAVE_DYNAMIC_FTRACE
+select HAVE_DYNAMIC_FTRACE_WITH_REGS if HAVE_DYNAMIC_FTRACE
What's the point of if HAVE_DYNAMIC_FTRACE here
On 2015/5/26 15:32, Jiri Slaby wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256
On 05/26/2015, 06:46 AM, Minfei Huang wrote:
On Tue, May 26, 2015 at 10:44 AM, Li Bin huawei.li...@huawei.com
wrote:
The klp_is_module return type should be boolean.
Signed-off-by: Li Bin huawei.li
On 2015/4/24 14:05, Masami Hiramatsu wrote:
(2015/04/24 12:24), Li Bin wrote:
On 2015/4/24 10:44, AKASHI Takahiro wrote:
This patchset enables livepatch support on arm64.
Livepatch was merged in v4.0, and allows replacing a function dynamically
based on the ftrace framework, but it also
, x30
str w0, [x29,28]
mov x0, x1
bl _mcount
...
Thanks,
Li Bin
Thanks,
--
This patch add support for livepatch on arm64 based on the gcc -mfentry
feature and the ftrace DYNAMIC_FTRACE_WITH_REGS feature.
Signed-off-by: Li Bin huawei.li...@huawei.com
---
arch/arm64/Kconfig |3 ++
arch/arm64/include/asm/livepatch.h | 45
such as kernel live patching.
This patch adds DYNAMIC_FTRACE_WITH_REGS feature support for arm64
architecture.
Signed-off-by: Li Bin huawei.li...@huawei.com
---
arch/arm64/Kconfig |1 +
arch/arm64/include/asm/ftrace.h |4 ++
arch/arm64/kernel/entry-ftrace.S | 95
' and not '_mcount'
and is done before the function's stack frame is set up. So __fentry__
is responsible for protecting the parameter registers and corruptible registers.
Signed-off-by: Li Bin huawei.li...@huawei.com
---
arch/arm64/Kconfig |1 +
arch/arm64/include/asm/ftrace.h |5 +++
arch
is a register mov operation
and has a relatively small impact on performance.
This patchset has been tested on arm64 platform.
Li Bin (4):
livepatch: ftrace: arm64: Add support for DYNAMIC_FTRACE_WITH_REGS
livepatch: ftrace: add ftrace_function_stub_ip function
livepatch: ftrace: arm64: Add support
.
EXAMPLES:
...
stub_ip = ftrace_function_stub_ip(func_addr);
ftrace_set_filter_ip(ftrace_ops, stub_ip, 0, 0);
register_ftrace_function(ftrace_ops);
...
Signed-off-by: Li Bin huawei.li...@huawei.com
---
include/linux/ftrace.h |1 +
kernel/trace/ftrace.c | 32
2
From: Xie XiuQi xiexi...@huawei.com
This patch implements klp_write_module_reloc on the arm64 platform.
Signed-off-by: Xie XiuQi xiexi...@huawei.com
Signed-off-by: Li Bin huawei.li...@huawei.com
---
arch/arm64/kernel/livepatch.c |7 +-
arch/arm64/kernel/module.c| 355
On 2015/6/2 10:15, AKASHI Takahiro wrote:
On 05/30/2015 09:01 AM, Masami Hiramatsu wrote:
On 2015/05/28 14:51, Li Bin wrote:
This patchset proposes a method for implementing the gcc -mfentry feature
(profile before prologue) for arm64, and proposes the livepatch
implementation for arm64 based
The klp_is_module return type should be boolean.
Signed-off-by: Li Bin huawei.li...@huawei.com
---
kernel/livepatch/core.c |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index 284e269..30e9339 100644
--- a/kernel
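The cleanup amounts to changing an int-returning helper to bool. A self-contained sketch of the idea (struct klp_object is reduced here to the single field that matters; in the kernel, name is NULL for vmlinux and set for a module):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Reduced stand-in for struct klp_object: the patched object is a
 * module iff it has a name; vmlinux is represented by name == NULL. */
struct klp_object {
    const char *name;
};

/* The cleanup in question: make the predicate return bool explicitly
 * rather than relying on an implicit int conversion of the pointer. */
static bool klp_is_module(const struct klp_object *obj)
{
    return obj->name != NULL;
}
```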
. And
it will bring performance overhead, such as do_mem_abort (in
the .exception.text section). This patch makes recordmcount turn
the mcount call into a nop for this case.
Signed-off-by: Li Bin <huawei.li...@huawei.com>
---
scripts/recordmcount.c | 23 ++-
1 file changed, 22 insertions
-by: Li Bin <huawei.li...@huawei.com>
---
scripts/recordmcount.c |2 ++
1 files changed, 2 insertions(+), 0 deletions(-)
diff --git a/scripts/recordmcount.c b/scripts/recordmcount.c
index 3d1984e..8cc020b 100644
--- a/scripts/recordmcount.c
+++ b/scripts/recordmcount.c
@@ -345,6
. And
it will bring performance overhead, such as do_mem_abort (in
the .exception.text section). This patch makes recordmcount turn
the mcount call into a nop for this case.
Cc: <sta...@vger.kernel.org> # 3.18+
Signed-off-by: Li Bin <huawei.li...@huawei.com>
---
scripts/recordmcount.c | 24 +++
-by: Li Bin <huawei.li...@huawei.com>
---
scripts/recordmcount.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/scripts/recordmcount.c b/scripts/recordmcount.c
index 3d1984e..8cc020b 100644
--- a/scripts/recordmcount.c
+++ b/scripts/recordmcount.c
@@ -345,6 +345,7 @@ do_file(char
Li Bin (3):
recordmcount: fix endianness handling bug for nop_mcount
recordmcount: x86: assign a meaningful value to rel_type_nop
recordmcount: arm64: replace the ignored mcount call into nop
scripts/recordmcount.c | 26 +-
scripts/recordmcount.h | 5 +++--
2 files
In nop_mcount, shdr->sh_offset and welp->r_offset should handle
endianness properly, otherwise it will trigger a segmentation fault
if recordmcount and file.o have different endianness.
Cc: <sta...@vger.kernel.org> # 3.0+
Signed-off-by: Li Bin <huawei.li...@huawei.co
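The underlying technique in recordmcount is that every multi-byte field read from the ELF file must go through a byte-order wrapper. Below is a simplified sketch of that pattern; the real tool's wrappers (w()/w8() and friends) are selected per input file, and the names here are illustrative only.

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of recordmcount-style byte-order handling: ELF fields are
 * stored in the file's endianness, so reads must swap when host and
 * file byte orders differ. */
static int file_is_big_endian;   /* would be set from the ELF EI_DATA byte */

static uint32_t swap32(uint32_t x)
{
    return (x >> 24) | ((x >> 8) & 0xff00u) |
           ((x << 8) & 0xff0000u) | (x << 24);
}

static int host_is_big_endian(void)
{
    const uint32_t probe = 1;
    return *(const uint8_t *)&probe == 0;
}

/* Read a 32-bit value stored in file byte order. */
static uint32_t w(uint32_t raw)
{
    if (host_is_big_endian() != file_is_big_endian)
        return swap32(raw);
    return raw;
}
```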
)
[ 198.591092] ---[ end trace 6a346f8f20949ac8 ]---
This patch fixes it, and dumps the real return address in the call trace.
Signed-off-by: Li Bin <huawei.li...@huawei.com>
---
arch/arm64/kernel/traps.c | 31 +++
1 file changed, 31 insertions(+)
diff --git a/arch/arm64/
949ac8 ]---
This is because when using the function graph tracer, if the traced
function's return value is in multiple regs (x0-x7), return_to_handler
may corrupt them. So in return_to_handler, the parameter regs should
be protected properly.
Cc: <sta...@vger.kernel.org> # 3.18+
Signed-off-by: Li Bin <huawei.l
rnel.crashing.org>
Cc: Paul Mackerras <pau...@samba.org>
Cc: Michael Ellerman <m...@ellerman.id.au>
Cc: linuxppc-...@lists.ozlabs.org
Cc: linux...@vger.kernel.org
Cc: Thomas Gleixner <t...@linutronix.de>
Cc: "H. Peter Anvin" <h...@zytor.com>
Cc: x..
: Tony Luck <tony.l...@intel.com>
Cc: Fenghua Yu <fenghua...@intel.com>
Cc: linux-i...@vger.kernel.org
Suggested-by: Steven Rostedt <rost...@goodmis.org>
Signed-off-by: Li Bin <huawei.li...@huawei.com>
---
arch/ia64/kernel/ftrace.c | 13 ++---
1 files changed, 6 in
: James Hogan <james.ho...@imgtec.com>
Cc: linux-me...@vger.kernel.org
Suggested-by: Steven Rostedt <rost...@goodmis.org>
Signed-off-by: Li Bin <huawei.li...@huawei.com>
---
arch/metag/kernel/ftrace.c | 12 ++--
1 files changed, 6 insertions(+), 6 deletions(-)
diff --git a
: Thomas Gleixner <t...@linutronix.de>
Cc: "H. Peter Anvin" <h...@zytor.com>
Cc: x...@kernel.org
Suggested-by: Steven Rostedt <rost...@goodmis.org>
Signed-off-by: Li Bin <huawei.li...@huawei.com>
---
arch/x86/kernel/ftrace.c | 13 ++---
1 files changed, 6 i
v2:
Based on the comments from Will and Steve,
1. Modify the commit message
2. Fix the misleading comments for ftrace_modify_code
v3:
Modify the comments again based on the comment from Steve.
Link: https://lkml.org/lkml/2015/12/3/422
Li Bin (2):
arm64: ftrace: stop using kstop_machine
tions", that can be executed by one
thread of execution as they are being modified by another thread
of execution without requiring explicit synchronization.
Signed-off-by: Li Bin <huawei.li...@huawei.com>
Reviewed-by: Steven Rostedt <rost...@goodmis.org>
---
arch/arm64/kernel/ftrace.c
I will also update the comment for the other archs that use a similar
description, such as ia64/metag/powerpc/sh/x86.
Thanks,
Li Bin
on 2015/12/4 10:50, Steven Rostedt wrote:
> On Fri, 4 Dec 2015 10:18:39 +0800
> Li Bin <huawei.li...@huawei.com> wrote:
>
>> There is n
: Benjamin Herrenschmidt <b...@kernel.crashing.org>
Cc: Paul Mackerras <pau...@samba.org>
Cc: Michael Ellerman <m...@ellerman.id.au>
Cc: linuxppc-...@lists.ozlabs.org
Suggested-by: Steven Rostedt <rost...@goodmis.org>
Signed-off-by: Li Bin <huawei.li...@huawei.com>
---
: linux...@vger.kernel.org
Suggested-by: Steven Rostedt <rost...@goodmis.org>
Signed-off-by: Li Bin <huawei.li...@huawei.com>
---
arch/sh/kernel/ftrace.c | 13 ++---
1 files changed, 6 insertions(+), 7 deletions(-)
diff --git a/arch/sh/kernel/ftrace.c b/arch/sh/kernel/ftrace.c
i
on 2015/12/4 10:50, Steven Rostedt wrote:
> On Fri, 4 Dec 2015 10:18:39 +0800
> Li Bin <huawei.li...@huawei.com> wrote:
>
>> There is no need to worry about module text disappearing case,
>> because that ftrace has a module notifier that is called when
>> a mo
.
Suggested-by: Steven Rostedt <rost...@goodmis.org>
Signed-off-by: Li Bin <huawei.li...@huawei.com>
---
arch/arm64/kernel/ftrace.c | 11 +--
1 files changed, 5 insertions(+), 6 deletions(-)
diff --git a/arch/arm64/kernel/ftrace.c b/arch/arm64/kernel/ftrace.c
index 9669b33..8f
: Benjamin Herrenschmidt <b...@kernel.crashing.org>
Cc: Paul Mackerras <pau...@samba.org>
Cc: Michael Ellerman <m...@ellerman.id.au>
Cc: linuxppc-...@lists.ozlabs.org
Suggested-by: Steven Rostedt <rost...@goodmis.org>
Signed-off-by: Li Bin <huawei.li...@huawei.com>
---
on 2015/12/6 6:52, Steven Rostedt wrote:
> On Sat, 5 Dec 2015 18:12:57 +0100 (CET)
> Thomas Gleixner <t...@linutronix.de> wrote:
>
>> On Fri, 4 Dec 2015, Li Bin wrote:
>>> --- a/arch/x86/kernel/ftrace.c
>>> +++ b/arch/x86/kernel/ftrace.c
>>>
: linux...@vger.kernel.org
Suggested-by: Steven Rostedt <rost...@goodmis.org>
Signed-off-by: Li Bin <huawei.li...@huawei.com>
---
arch/sh/kernel/ftrace.c | 12 +---
1 files changed, 5 insertions(+), 7 deletions(-)
diff --git a/arch/sh/kernel/ftrace.c b/arch/sh/kernel/ftrace.c
i
: James Hogan <james.ho...@imgtec.com>
Cc: linux-me...@vger.kernel.org
Suggested-by: Steven Rostedt <rost...@goodmis.org>
Signed-off-by: Li Bin <huawei.li...@huawei.com>
---
arch/metag/kernel/ftrace.c | 11 +--
1 files changed, 5 insertions(+), 6 deletions(-)
diff --git a
: Tony Luck <tony.l...@intel.com>
Cc: Fenghua Yu <fenghua...@intel.com>
Cc: linux-i...@vger.kernel.org
Suggested-by: Steven Rostedt <rost...@goodmis.org>
Signed-off-by: Li Bin <huawei.li...@huawei.com>
---
arch/ia64/kernel/ftrace.c | 12 +---
1 files changed, 5 in
rnel.crashing.org>
Cc: Paul Mackerras <pau...@samba.org>
Cc: Michael Ellerman <m...@ellerman.id.au>
Cc: linuxppc-...@lists.ozlabs.org
Cc: linux...@vger.kernel.org
Cc: Thomas Gleixner <t...@linutronix.de>
Cc: "H. Peter Anvin" <h...@zytor.com>
Cc: x..
: Thomas Gleixner <t...@linutronix.de>
Cc: "H. Peter Anvin" <h...@zytor.com>
Cc: x...@kernel.org
Suggested-by: Steven Rostedt <rost...@goodmis.org>
Signed-off-by: Li Bin <huawei.li...@huawei.com>
---
arch/x86/kernel/ftrace.c | 12 +---
1 files changed, 5 i
tions", that can be executed by one
thread of execution as they are being modified by another thread
of execution without requiring explicit synchronization.
Signed-off-by: Li Bin <huawei.li...@huawei.com>
---
arch/arm64/kernel/ftrace.c |5 +
1 files changed, 5 insertions(+), 0 deletion
, such that it will no longer do any
modifications to that module's text.
The update to make functions be traced or not is done under the
ftrace_lock mutex as well.
Signed-off-by: Li Bin <huawei.li...@huawei.com>
---
arch/arm64/kernel/ftrace.c |5 +
1 files changed, 1 insertions(+), 4 deletions(-)
diff
v2:
Based on the comments from Will and Steve,
1. Modify the commit message
2. Fix the misleading comments for ftrace_modify_code
Link: https://lkml.org/lkml/2015/12/3/422
Li Bin (2):
arm64: ftrace: stop using kstop_machine to enable/disable tracing
arm64: ftrace: fix the comments
: <sta...@vger.kernel.org> # 3.18+
Signed-off-by: Li Bin <huawei.li...@huawei.com>
---
arch/arm64/kernel/ftrace.c |5 +
1 files changed, 5 insertions(+), 0 deletions(-)
diff --git a/arch/arm64/kernel/ftrace.c b/arch/arm64/kernel/ftrace.c
index c851be7..9669b33 100644
--- a/arch/
s/ARCH_SUPPORT_FTARCE_OPS/ARCH_SUPPORTS_FTRACE_OPS
Signed-off-by: Li Bin <huawei.li...@huawei.com>
---
kernel/trace/ftrace.c |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 3f743b1..0033e05 100644
--- a/kernel
();
| |-mutex_unlock(&klp_mutex);
|-[process the patch's state]|
|-mutex_unlock(&klp_mutex) |
Fix this race condition by adding a klp_is_patch_registered() check in
enabled_store() after taking the klp_mutex lock.
Signed-off-by: Li Bin <huawei.li...@huawei.com>
---
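The fix pattern, re-validating state only after the lock is held, can be sketched as below. The lock here is a stand-in and all names are illustrative; this is not the kernel's klp_mutex machinery, only the shape of the check-after-lock idiom.

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in lock, just enough to show where the re-check must sit. */
static bool lock_held;
static void klp_lock(void)   { lock_held = true; }
static void klp_unlock(void) { lock_held = false; }

static bool patch_registered;

/* Sketch of the fixed enabled_store(): registration state can change
 * before the lock is taken, so it is only checked after klp_lock(). */
static int enabled_store(bool enable)
{
    int ret = 0;

    klp_lock();
    if (!patch_registered) {
        ret = -1;            /* the kernel would return an errno here */
    } else {
        /* ... process the enable/disable request under the lock ... */
        (void)enable;
    }
    klp_unlock();
    return ret;
}
```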
s/ARCH_SUPPORT_FTARCE_OPS/ARCH_SUPPORTS_FTARCE_OPS
Signed-off-by: Li Bin <huawei.li...@huawei.com>
---
kernel/trace/ftrace.c |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 3f743b1..eb4a881 100644
--- a/kernel
The file cgroup-debug.c was removed by commit fe6934354f8e
(cgroups: move the cgroup debug subsys into cgroup.c to access internal state),
leaving the CFLAGS_REMOVE_cgroup-debug.o = $(CC_FLAGS_FTRACE) line
in kernel/Makefile useless.
Signed-off-by: Li Bin <huawei.li...@huawei.com>
---
Hi Pratyush,
on 2016/4/4 13:17, Pratyush Anand wrote:
> Hi Li,
>
> On 31/03/2016:08:45:05 PM, Li Bin wrote:
>> Hi Pratyush,
>>
>> on 2016/3/21 18:24, Pratyush Anand wrote:
>>> On 21/03/2016:08:37:50 AM, He Kuang wrote:
>>>> On arm64, watchpoin
on 2016/4/8 13:14, Pratyush Anand wrote:
> Hi Li,
>
> On 07/04/2016:07:34:37 PM, Li Bin wrote:
>> Hi Pratyush,
>>
>> on 2016/4/4 13:17, Pratyush Anand wrote:
>>> Hi Li,
>>>
>>> On 31/03/2016:08:45:05 PM, Li Bin wrote:
>>>&g
>
> ~Pratyush
>
> [1]
> https://github.com/pratyushanand/linux/commit/7623c8099ac22eaa00e7e0f52430f7a4bd154652
This patch did not consider that, on exception return, the single-step flag
should be restored; otherwise the expected single step will not be triggered.
Right?
Thanks,
Li Bin
ruction_pointer_set(regs, (long)jp->entry);
preempt_disable();
+ pause_graph_tracing();
return 1;
}
@@ -757,6 +758,7 @@ int __kprobes longjmp_break_handler(struct kprobe *p,
struct pt_regs *regs)
show_regs(regs);
BUG();
Hi David,
on 2016/3/9 13:32, David Long wrote:
> +int __kprobes arch_prepare_kprobe(struct kprobe *p)
> +{
> + unsigned long probe_addr = (unsigned long)p->addr;
The address alignment should be verified here:
if (probe_addr & 0x3)
return -EINVAL;
, in the exception
handler, because the break_handler has been set to NULL, it will not
setup_singlestep, and will return to the original instruction...
To fix this bug, __unregister_kprobe_top calls synchronize_sched()
before clearing the handler.
Signed-off-by: Li Bin <huawei.li...@huawei.
right shifted. So,
> + switch (offset) {
> + case 0 ... 30:
> + val = regs->regs[offset];
> + break;
> + case offsetof(struct pt_regs, sp):
here should be shifted too, as
case offsetof(struct pt_regs, sp) >> 3:
> +
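The review comment can be demonstrated with a small compile-time model: when the switch operand is a byte offset shifted right by 3 (8-byte register slots), every offsetof-based case label needs the same shift. The struct and function below are illustrative only, not the patch's code; the range case uses the GCC extension also used by the patch under review.

```c
#include <assert.h>
#include <stddef.h>

/* Simplified arm64-like register file: x0..x30 followed by sp. */
struct fake_pt_regs {
    unsigned long regs[31];
    unsigned long sp;
};

/* The switch operand is the 8-byte slot index (byte offset >> 3), so
 * offsetof-based case labels must be shifted the same way. */
static unsigned long read_reg(const struct fake_pt_regs *r, size_t byte_off)
{
    switch (byte_off >> 3) {
    case 0 ... 30:                               /* GCC range-case extension */
        return r->regs[byte_off >> 3];
    case offsetof(struct fake_pt_regs, sp) >> 3: /* label shifted too */
        return r->sp;
    default:
        return 0;
    }
}
```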
arch64 based on
the aarch64 mfentry feature. When the community has a clear plan, we are happy
to adapt and contribute our related work to the community, including
the kpatch-build support :-)
[1] livepatch: add support on arm64
https://lkml.org/lkml/2015/5/28/54
[2] [AArch64] support -mfentry feature for arm64
https://gcc.gnu.org/ml/gcc-patches/2016-03/msg00756.html
[3] Kernel livepatching support in GCC
https://gcc.gnu.org/ml/gcc/2015-05/msg00267.html
[4] arm64: ftrace with regs for livepatch support
http://lists.infradead.org/pipermail/linux-arm-kernel/2016-January/401352.html
Thanks,
Li Bin
>
> Jessica
>
> .
>
adrp x16, 1 <__FRAME_END__+0xf8a8>
594: f944c611 ldr x17, [x16,#2440]
598: 91262210 add x16, x16, #0x988
59c: d61f0220 br x17
NOTES:
In addition to ARM and AARCH64, other architectures, such as
s390/alpha/mips/parisc/powerpc/sh/sparc/xte
e included.
Signed-off-by: Li Bin <huawei.li...@huawei.com>
---
tools/perf/util/probe-event.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c
index a2670e9..bf7c928 100644
--- a/tools/perf/util/probe-event.c
+++ b
WQ_WORKER))
>> return kthread_data(current);
>> return NULL;
>> }
>
> Yeah, that makes sense to me. Can you please resend the patch with
> patch description and SOB?
Ok, I will resend the patch soon.
Thanks,
Li Bin
>
> Thanks.
>
the 'current' to check the
condition.
Reported-by: Xiaofei Tan <tanxiao...@huawei.com>
Signed-off-by: Li Bin <huawei.li...@huawei.com>
---
kernel/workqueue_internal.h | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/kernel/workqueue_internal.h b/kernel/workqueue_
Hi, Jiangshan
on 2017/10/26 23:55, Lai Jiangshan wrote:
> On Tue, Oct 24, 2017 at 9:18 AM, Li Bin <huawei.li...@huawei.com> wrote:
>
> I remember that softirq can be invoked when irq_exit(),
> and in this case the current->current_pwq is also NULL
> if __queue_work() i
r *current_wq_worker(void)
{
- if (current->flags & PF_WQ_WORKER)
+ if (!in_irq() && (current->flags & PF_WQ_WORKER))
return kthread_data(current);
return NULL;
}
Thanks,
Li Bin
> Thanks.
>
we shouldn't use the 'current' to check the
condition.
Reported-by: Xiaofei Tan <tanxiao...@huawei.com>
Signed-off-by: Li Bin <huawei.li...@huawei.com>
---
kernel/workqueue_internal.h | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/kernel/workqueue_internal.h b/ker
n the rq1, even though we hold the rq1->lock. This patch will
repick the first pushable task to be sure the task is still on the rq.
Signed-off-by: Li Bin <huawei.li...@huawei.com>
---
kernel/sched/deadline.c | 48 +++-
1 file changed, 23
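The race and the fix can be modeled abstractly: a task picked before the lock window may have migrated away, so the pick must be revalidated after the lock is retaken. This is a toy model with invented names, not the scheduler code.

```c
#include <assert.h>
#include <stddef.h>

struct rq;

struct task {
    struct rq *on_rq;        /* which runqueue the task currently sits on */
};

struct rq {
    struct task *pushable[4];
    size_t nr;
};

/* Return the first pushable task that is still on this rq. */
static struct task *pick_next_pushable(struct rq *rq)
{
    for (size_t i = 0; i < rq->nr; i++)
        if (rq->pushable[i] && rq->pushable[i]->on_rq == rq)
            return rq->pushable[i];
    return NULL;
}

/* After the lock was dropped and retaken (the double_lock_balance()
 * window), the earlier pick is only trusted if it is still the first
 * pushable task; otherwise repick. */
static struct task *revalidate_pick(struct rq *rq, struct task *old)
{
    struct task *now = pick_next_pushable(rq);
    return (now == old) ? old : now;
}
```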
q0
So we can't rely on these checks of task_A to make sure the task_A is
still on the rq1, even though we hold the rq1->lock. This patch will
repick the first pushable task to be sure the task is still on the rq.
Signed-off-by: Zhou Chengming <zhouchengmi...@huawei.com>
Signed-o
Li Bin (1):
sched/deadline.c: pick and check task if double_lock_balance() unlock
the rq
Zhou Chengming (1):
sched/rt.c: pick and check task if double_lock_balance() unlock the
rq
kernel/sched/deadline.c | 48 +++-
kernel/sched/rt.c
n the rq1, even though we hold the rq1->lock. This patch will
repick the first pushable task to be sure the task is still on the rq.
Signed-off-by: Li Bin <huawei.li...@huawei.com>
Acked-by: Peter Zijlstra (Intel) <pet...@infradead.org>
Reviewed-by: Steven Rostedt (VMware) <rost.
q0
So we can't rely on these checks of task_A to make sure the task_A is
still on the rq1, even though we hold the rq1->lock. This patch will
repick the first pushable task to be sure the task is still on the rq.
Signed-off-by: Zhou Chengming <zhouchengmi...@huawei.com>
Signed-o
Changes in v2:
* Only change the comment and coding style as suggested by Steve
Li Bin (1):
sched/deadline.c: pick and check task if double_lock_balance() unlock
the rq
Zhou Chengming (1):
sched/rt.c: pick and check task if double_lock_balance() unlock the
rq
kernel/sched
Signed-off-by: Li Bin <huawei.li...@huawei.com>
---
kernel/sched/topology.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 64cc564..cf15c1c 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -
The member auxv in the prctl_mm_map structure, which is shared with
userspace, is a pointer type, but kernels supporting COMPAT didn't
handle it. This patch fixes the compat handling for the prctl syscall.
Signed-off-by: Li Bin <huawei.li...@huawei.com>
---
kernel/sys.
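The compat problem here is the classic one of a userspace-shared struct embedding a pointer: the 32-bit and 64-bit layouts differ, so a COMPAT kernel must translate. A reduced sketch follows; the struct and field names are invented for illustration (the real prctl_mm_map has many more fields).

```c
#include <assert.h>
#include <stdint.h>

/* A user pointer as seen from a 32-bit (compat) caller. */
typedef uint32_t compat_uptr_t;

struct mm_map64 {
    uint64_t start_code;
    uint64_t auxv;            /* native: 64-bit user pointer */
    uint32_t auxv_size;
};

struct mm_map32 {
    uint32_t start_code;
    compat_uptr_t auxv;       /* compat: 32-bit user pointer */
    uint32_t auxv_size;
};

/* Widen the compat layout into the native one before common handling,
 * which is what a compat syscall entry point would have to do. */
static void mm_map_from_compat(struct mm_map64 *dst,
                               const struct mm_map32 *src)
{
    dst->start_code = src->start_code;
    dst->auxv = (uint64_t)src->auxv;
    dst->auxv_size = src->auxv_size;
}
```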
Hi, Jiangshan
on 2017/10/26 23:55, Lai Jiangshan wrote:
> On Tue, Oct 24, 2017 at 9:18 AM, Li Bin wrote:
>
> I remember that softirq can be invoked when irq_exit(),
> and in this case the current->current_pwq is also NULL
> if __queue_work() is called in the soft irq.
>
we shouldn't use the 'current' to check the
condition.
Reported-by: Xiaofei Tan
Signed-off-by: Li Bin
---
kernel/workqueue_internal.h | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/kernel/workqueue_internal.h b/kernel/workqueue_internal.h
index 8635417..29fa81f 100644
e included.
Signed-off-by: Li Bin
---
tools/perf/util/probe-event.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c
index a2670e9..bf7c928 100644
--- a/tools/perf/util/probe-event.c
+++ b/tools/perf/util/probe-event.c
n the rq1, even though we hold the rq1->lock. This patch will
repick the first pushable task to be sure the task is still on the rq.
Signed-off-by: Li Bin
---
kernel/sched/deadline.c | 48 +++-
1 file changed, 23 insertions(+), 25 deletions(-)
se checks of task_A to make sure the task_A is
still on the rq1, even though we hold the rq1->lock. This patch will
repick the first pushable task to be sure the task is still on the rq.
Signed-off-by: Zhou Chengming
Signed-off-by: Li Bin
---
kernel/sch