the warning and the instruction write.
>
> Signed-off-by: Josh Poimboeuf
Reviewed-by: Miroslav Benes
M
On Tue, 24 Jan 2023, Josh Poimboeuf wrote:
> restore_r2() returns 1 on success, which is surprising for a non-boolean
> function. Change it to return 0 on success and -errno on error to match
> kernel coding convention.
>
> Signed-off-by: Josh Poimboeuf
Reviewed-by: Miroslav Benes
M
> > Petr has commented on the code aspects. I will just add that s390x was not
> > dealt with at the time because there was no live patching support for
> > s390x back then if I remember correctly and my notes do not lie. The same
> > applies to powerpc32. I think that both should be fixed as well
Hi,
first, thank you for taking over, and I also apologize for not replying
much sooner.
On Thu, 1 Sep 2022, Song Liu wrote:
> From: Miroslav Benes
>
> Josh reported a bug:
>
> When the object to be patched is a module, and that module is
> rmmod'ed and reloaded
> > > --- a/kernel/livepatch/core.c
> > > +++ b/kernel/livepatch/core.c
> > > @@ -316,6 +316,45 @@ int klp_apply_section_relocs(struct module *pmod,
> > > Elf_Shdr *sechdrs,
> > > return apply_relocate_add(sechdrs, strtab, symndx, secndx, pmod);
> > > }
> > >
> > > +static void
only thing remaining in asm/livepatch.h
> on x86 and s390, remove asm/livepatch.h
>
> livepatch.h remains on powerpc but its content is exclusively used
> by powerpc specific code.
>
> Signed-off-by: Christophe Leroy
Acked-by: Miroslav Benes
M
> +#define sym_for_each_insn(file, sym, insn) \
> + for (insn = find_insn(file, sym->sec, sym->offset); \
> + insn && &insn->list != &file->insn_list && \
> + insn->sec == sym->sec && \
> +
On Thu, 27 Jan 2022, Christophe Leroy wrote:
> This series allow architectures to request having modules data in
> vmalloc area instead of module area.
>
> This is required on powerpc book3s/32 in order to set data non
> executable, because it is not possible to set executability on page
>
> @@ -195,6 +208,9 @@ static void mod_tree_remove(struct module *mod)
> {
> __mod_tree_remove(&mod->core_layout.mtn, &mod_tree);
> mod_tree_remove_init(mod);
> +#ifdef CONFIG_ARCH_WANTS_MODULES_DATA_IN_VMALLOC
> + __mod_tree_remove(&mod->core_layout.mtn, &mod_data_tree);
s/core_layout/data_layout/
On Mon, 20 Dec 2021, Christophe Leroy wrote:
> Implement CONFIG_DYNAMIC_FTRACE_WITH_ARGS. It accelerates the call
> of livepatching.
>
> Also note that powerpc being the last one to convert to
> CONFIG_DYNAMIC_FTRACE_WITH_ARGS, it will now be possible to remove
> klp_arch_set_pc() on all
On Mon, 20 Dec 2021, Christophe Leroy wrote:
> PPC64 needs some special logic to properly set up the TOC.
> See commit 85baa095497f ("powerpc/livepatch: Add live patching support
> on ppc64le") for details.
>
> PPC32 doesn't have TOC so it doesn't need that logic, so adding
> LIVEPATCH support
ight types instead of forcing 64 bits types.
>
> Fixes: 7c8e2bdd5f0d ("livepatch: Apply vmlinux-specific KLP relocations
> early")
> Signed-off-by: Christophe Leroy
> Acked-by: Petr Mladek
Acked-by: Miroslav Benes
M
Hi,
On Thu, 28 Oct 2021, Christophe Leroy wrote:
> This series implements livepatch on PPC32.
>
> This is largely copied from what's done on PPC64.
>
> Christophe Leroy (5):
> livepatch: Fix build failure on 32 bits processors
> powerpc/ftrace: No need to read LR from stack in _mcount()
>
Hi,
> diff --git a/include/linux/trace_recursion.h b/include/linux/trace_recursion.h
> index abe1a50..2bc1522 100644
> --- a/include/linux/trace_recursion.h
> +++ b/include/linux/trace_recursion.h
> @@ -135,6 +135,9 @@ static __always_inline int trace_get_context_bit(void)
> # define
node);
> @@ -120,7 +122,6 @@ static void notrace klp_ftrace_handler(unsigned long ip,
> klp_arch_set_pc(fregs, (unsigned long)func->new_func);
>
> unlock:
> - preempt_enable_notrace();
> ftrace_test_recursion_unlock(bit);
> }
Acked-by: Miroslav Benes
for the livepatch par
> > Side note... the comment will eventually conflict with peterz's
> > https://lore.kernel.org/all/20210929152429.125997...@infradead.org/.
>
> Steven, would you like to share your opinion on this patch?
>
> If klp_synchronize_transition() will be removed anyway, the comments
> will be
> diff --git a/include/linux/trace_recursion.h b/include/linux/trace_recursion.h
> index a9f9c57..101e1fb 100644
> --- a/include/linux/trace_recursion.h
> +++ b/include/linux/trace_recursion.h
> @@ -208,13 +208,29 @@ static __always_inline void trace_clear_recursion(int
> bit)
> * Use this for
> diff --git a/include/linux/trace_recursion.h b/include/linux/trace_recursion.h
> index a9f9c57..805f9c4 100644
> --- a/include/linux/trace_recursion.h
> +++ b/include/linux/trace_recursion.h
> @@ -214,7 +214,14 @@ static __always_inline void trace_clear_recursion(int
> bit)
> static
mbol_args to find_symbol
>
> Simplify the calling convention by passing the find_symbol_args structure
> to find_symbol instead of initializing it inside the function.
>
> Signed-off-by: Christoph Hellwig
Reviewed-by: Miroslav Benes
M
> void *__symbol_get(const char *symbol)
> {
> - struct module *owner;
> - const struct kernel_symbol *sym;
> + struct find_symbol_arg fsa = {
> + .name = symbol,
> + .gplok = true,
> + .warn = true,
> + };
>
> preempt_disable();
>
On Tue, 2 Feb 2021, Christoph Hellwig wrote:
> EXPORT_UNUSED_SYMBOL* is not actually used anywhere. Remove the
> unused functionality as we generally just remove unused code anyway.
>
> Signed-off-by: Christoph Hellwig
Reviewed-by: Miroslav Benes
M
On Tue, 2 Feb 2021, Christoph Hellwig wrote:
> As far as I can tell this has never been used at all, and certainly
> not any time recently.
Right, I've always wondered about this one.
> Signed-off-by: Christoph Hellwig
Reviewed-by: Miroslav Benes
M
On Tue, 2 Feb 2021, Christoph Hellwig wrote:
> struct symsearch is only used inside of module.h, so move the definition
> out of module.h.
>
> Signed-off-by: Christoph Hellwig
Reviewed-by: Miroslav Benes
M
On Tue, 2 Feb 2021, Christoph Hellwig wrote:
> each_symbol_section is only called by find_symbol, so merge the two
> functions.
>
> Signed-off-by: Christoph Hellwig
Reviewed-by: Miroslav Benes
M
On Tue, 2 Feb 2021, Christoph Hellwig wrote:
> each_symbol_in_section just contains a trivial loop over its arguments.
> Just open code the loop in the two callers.
>
> Signed-off-by: Christoph Hellwig
Reviewed-by: Miroslav Benes
M
-off-by: Christoph Hellwig
Reviewed-by: Miroslav Benes
M
On Tue, 2 Feb 2021, Christoph Hellwig wrote:
> kallsyms_on_each_symbol and module_kallsyms_on_each_symbol are only used
> by the livepatching code, so don't build them if livepatching is not
> enabled.
>
> Signed-off-by: Christoph Hellwig
Reviewed-by: Miroslav Benes
M
On Mon, 1 Feb 2021, Christoph Hellwig wrote:
> On Mon, Feb 01, 2021 at 02:37:12PM +0100, Miroslav Benes wrote:
> > > > This change is not needed. (objname == NULL) means that we are
> > > > interested only in symbols in "vmlinux".
> > > >
> >
One more thing...
> @@ -4379,8 +4379,7 @@ int module_kallsyms_on_each_symbol(int (*fn)(void *,
> const char *,
> unsigned int i;
> int ret;
>
> - module_assert_mutex();
> -
> + mutex_lock(&module_mutex);
> list_for_each_entry(mod, &modules, list) {
> /* We hold
On Mon, 1 Feb 2021, Christoph Hellwig wrote:
> On Fri, Jan 29, 2021 at 10:43:36AM +0100, Petr Mladek wrote:
> > > --- a/kernel/livepatch/core.c
> > > +++ b/kernel/livepatch/core.c
> > > @@ -164,12 +164,8 @@ static int klp_find_object_symbol(const char
> > > *objname, const char *name,
> > >
On Mon, 1 Feb 2021, Jessica Yu wrote:
> +++ Miroslav Benes [29/01/21 16:29 +0100]:
> >On Thu, 28 Jan 2021, Christoph Hellwig wrote:
> >
> >> Allow for a RCU-sched critical section around find_module, following
> >> the lower level find_module_all helper, and sw
On Thu, 28 Jan 2021, Christoph Hellwig wrote:
> Allow for a RCU-sched critical section around find_module, following
> the lower level find_module_all helper, and switch the two callers
> outside of module.c to use such a RCU-sched critical section instead
> of module_mutex.
That's a nice idea.
On Thu, 28 Jan 2021, Christoph Hellwig wrote:
> find_module is not used by modular code any more, and random driver code
> has no business calling it to start with.
>
> Signed-off-by: Christoph Hellwig
Reviewed-by: Miroslav Benes
M
On Tue, 12 Dec 2017, Torsten Duwe wrote:
> Hi all,
>
> The "Power Architecture 64-Bit ELF V2 ABI" says in section 2.3.2.3:
>
> [...] There are several rules that must be adhered to in order to ensure
> reliable and consistent call chain backtracing:
>
> * Before a function calls any other
ake signal is not automatic. It is done only when
admin requests it by writing 1 to signal sysfs attribute in livepatch
sysfs directory.
Signed-off-by: Miroslav Benes <mbe...@suse.cz>
Cc: Oleg Nesterov <o...@redhat.com>
Cc: Michael Ellerman <m...@ellerman.id.au>
Cc: Thomas Glei
On Thu, 2 Nov 2017, Josh Poimboeuf wrote:
> On Tue, Oct 31, 2017 at 12:48:52PM +0100, Miroslav Benes wrote:
> > diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
> > index bf8c8fd72589..b7c60662baf3 100644
> > --- a/kernel/livepatch/core.c
> > ++
On Thu, 2 Nov 2017, Josh Poimboeuf wrote:
> On Tue, Oct 31, 2017 at 12:48:52PM +0100, Miroslav Benes wrote:
> > +
> > +/*
> > + * Sends a fake signal to all non-kthread tasks with TIF_PATCH_PENDING set.
> > + * Kthreads with TIF_PATCH_PENDING set are woken up.
On Wed, 1 Nov 2017, Oleg Nesterov wrote:
> On 11/01, Petr Mladek wrote:
> >
> > On Tue 2017-10-31 12:48:52, Miroslav Benes wrote:
> > > + if (task->flags & PF_KTHREAD) {
> > > + /*
> > > + * Wa
> +/*
> + * Sends a fake signal to all non-kthread tasks with TIF_PATCH_PENDING set.
> + * Kthreads with TIF_PATCH_PENDING set are woken up. Only admin can request
> this
> + * action currently.
> + */
> +void klp_force_signals(void)
> +{
> + struct task_struct *g, *task;
> +
> +
ake signal is not automatic. It is done only when
admin requests it by writing 1 to signal sysfs attribute in livepatch
sysfs directory.
Signed-off-by: Miroslav Benes <mbe...@suse.cz>
Cc: Oleg Nesterov <o...@redhat.com>
Cc: Michael Ellerman <m...@ellerman.id.au>
Cc: Thomas Glei
On Fri, 11 Aug 2017, Josh Poimboeuf wrote:
> On Thu, Aug 10, 2017 at 12:48:14PM +0200, Miroslav Benes wrote:
> > Last, sending the fake signal is not automatic. It is done only when
> > admin requests it by writing 1 to force sysfs attribute in livepatch
> > sysfs direc
g the fake signal is not automatic. It is done only when
admin requests it by writing 1 to force sysfs attribute in livepatch
sysfs directory.
Signed-off-by: Miroslav Benes <mbe...@suse.cz>
Cc: Oleg Nesterov <o...@redhat.com>
Cc: Michael Ellerman <m...@ellerman.id.au>
Cc: Thomas
ay and prevent these
> races by design. But it made the patch definition more complicated
> and opened another can of worms. See
> https://lkml.kernel.org/r/1464018848-4303-1-git-send-email-pmla...@suse.com
>
> [Thanks to Petr Mladek for improving the commit message.]
>
> Signed-off-by: Mi
e /sys/kernel/livepatch/<patch>/enabled file while
> the transition is in progress. Then all the tasks will attempt to
> converge back to the original patch state.
>
> [1] https://lkml.kernel.org/r/20141107140458.ga21...@suse.cz
>
> Signed-off-by: Josh Poimboeuf <jpoim...@redhat.com>
I looked at the patch again and could not see any problem with it. I
tested it with a couple of live patches too and it worked as expected.
Good job.
Acked-by: Miroslav Benes <mbe...@suse.cz>
Thanks,
Miroslav
On Tue, 21 Feb 2017, Josh Poimboeuf wrote:
> On Fri, Feb 17, 2017 at 09:51:29AM +0100, Miroslav Benes wrote:
> > On Thu, 16 Feb 2017, Josh Poimboeuf wrote:
> > > What do you think about the following? I tried to put the logic in
> > > klp_complete_transition(),
On Mon, 13 Feb 2017, Josh Poimboeuf wrote:
> Here's v5 of the consistency model, targeted for 4.12. Only a few minor
> changes this time.
>
> v5:
> - return -EINVAL in __save_stack_trace_reliable()
> - only call show_stack() once
> - add save_stack_trace_tsk_reliable() define for
On Thu, 16 Feb 2017, Josh Poimboeuf wrote:
> On Thu, Feb 16, 2017 at 03:33:26PM +0100, Miroslav Benes wrote:
> >
> > > @@ -347,22 +356,36 @@ static int __klp_enable_patch(struct klp_patch
> > > *patch)
> > >
> > > pr_not
> @@ -347,22 +356,36 @@ static int __klp_enable_patch(struct klp_patch *patch)
>
> pr_notice("enabling patch '%s'\n", patch->mod->name);
>
> + klp_init_transition(patch, KLP_PATCHED);
> +
> + /*
> + * Enforce the order of the func->transition writes in
> + *
tries array
>
> Such issues are reported by checking unwind_error() and !unwind_done().
>
> Also add CONFIG_HAVE_RELIABLE_STACKTRACE so arch-independent code can
> determine at build time whether the function is implemented.
>
> Signed-off-by: Josh Poimboeuf <jpoim...@redhat.c
> > And finally, the section "Limitations" has this text under the first
> > bullet:
> >
> > + The patch must not change the semantic of the patched functions.
> >
> > The current implementation guarantees only that either the old
> > or the new function is called. The functions are
On Thu, 19 Jan 2017, Josh Poimboeuf wrote:
> From: Miroslav Benes <mbe...@suse.cz>
>
> Currently we do not allow patch module to unload since there is no
> method to determine if a task is still running in the patched code.
>
> The consistency model gives us the way bec
Petr has already mentioned majority of things I too found out, so only
couple of nits...
> diff --git a/Documentation/ABI/testing/sysfs-kernel-livepatch
> b/Documentation/ABI/testing/sysfs-kernel-livepatch
> index da87f43..24b6570 100644
> --- a/Documentation/ABI/testing/sysfs-kernel-livepatch
On Thu, 2 Feb 2017, Petr Mladek wrote:
> > diff --git a/Documentation/livepatch/livepatch.txt
> > b/Documentation/livepatch/livepatch.txt
> > index 7f04e13..fb00d66 100644
> > --- a/Documentation/livepatch/livepatch.txt
> > +++ b/Documentation/livepatch/livepatch.txt
>
> > + In that case,
tries array
>
> Such issues are reported by checking unwind_error() and !unwind_done().
>
> Also add CONFIG_HAVE_RELIABLE_STACKTRACE so arch-independent code can
> determine at build time whether the function is implemented.
>
> Signed-off-by: Josh Poimboeuf <jpoim...@redhat.com>
Looks good to me.
Reviewed-by: Miroslav Benes <mbe...@suse.cz>
Miroslav
On Wed, 1 Feb 2017, Josh Poimboeuf wrote:
> On Thu, Jan 19, 2017 at 09:46:08AM -0600, Josh Poimboeuf wrote:
> > Here's v4, based on linux-next/master. Mostly minor changes this time,
> > primarily due to Petr's v3 comments.
>
> So far, the only review comments have been related to the first
On Tue, 31 Jan 2017, Josh Poimboeuf wrote:
> On Tue, Jan 31, 2017 at 03:31:39PM +0100, Miroslav Benes wrote:
> > On Thu, 19 Jan 2017, Josh Poimboeuf wrote:
> >
> > > Expose the per-task patch state value so users can determine which tasks
> > > are holding up c
On Thu, 19 Jan 2017, Josh Poimboeuf wrote:
> Expose the per-task patch state value so users can determine which tasks
> are holding up completion of a patching operation.
>
> Signed-off-by: Josh Poimboeuf <jpoim...@redhat.com>
> Reviewed-by: Petr Mladek <pmla...@suse.com>
> Reviewed-by: Petr Mladek <pmla...@suse.com>
Acked-by: Miroslav Benes <mbe...@suse.cz>
Miroslav
> diff --git a/include/linux/stacktrace.h b/include/linux/stacktrace.h
> index 0a34489..8e8b67b 100644
> --- a/include/linux/stacktrace.h
> +++ b/include/linux/stacktrace.h
> @@ -18,6 +18,8 @@ extern void save_stack_trace_regs(struct pt_regs *regs,
> struct
> > > --- a/kernel/sched/idle.c
> > > +++ b/kernel/sched/idle.c
> > > @@ -9,6 +9,7 @@
> > > #include
> > > #include
> > > #include
> > > +#include <linux/livepatch.h>
> > >
> > > #include
> > >
> > > @@ -264,6 +265,9 @@ static void do_idle(void)
> > >
> > > sched_ttwu_pending();
> > >
> @@ -740,6 +809,14 @@ int klp_register_patch(struct klp_patch *patch)
> return -ENODEV;
>
> /*
> + * Architectures without reliable stack traces have to set
> + * patch->immediate because there's currently no way to patch kthreads
> + * with the consistency
> diff --git a/samples/livepatch/livepatch-sample.c
> b/samples/livepatch/livepatch-sample.c
> index bb61c65..0625f38 100644
> --- a/samples/livepatch/livepatch-sample.c
> +++ b/samples/livepatch/livepatch-sample.c
> @@ -89,7 +89,6 @@ static int livepatch_init(void)
>
> static void
On Thu, 8 Dec 2016, Josh Poimboeuf wrote:
> Expose the per-task patch state value so users can determine which tasks
> are holding up completion of a patching operation.
>
> Signed-off-by: Josh Poimboeuf <jpoim...@redhat.com>
Reviewed-by: Miroslav Benes <mbe...@suse.cz>
Miroslav
On Thu, 8 Dec 2016, Josh Poimboeuf wrote:
> +void klp_start_transition(void)
> +{
> + struct task_struct *g, *task;
> + unsigned int cpu;
> +
> + WARN_ON_ONCE(klp_target_state == KLP_UNDEFINED);
> +
> + pr_notice("'%s': %s...\n", klp_transition_patch->mod->name,
> +
On Thu, 8 Dec 2016, Josh Poimboeuf wrote:
> For the consistency model we'll need to know the sizes of the old and
> new functions to determine if they're on the stacks of any tasks.
>
> Signed-off-by: Josh Poimboeuf <jpoim...@redhat.com>
Acked-by: Miroslav Benes <mbe...@suse.cz>
Miroslav
toul(buf, 10, &val);
> > > + ret = kstrtobool(buf, &enabled);
> > > if (ret)
> > > return -EINVAL;
> >
> > I would return "ret" here. It is -EINVAL as well but... ;-)
>
> That was a preexisting issue with the kstrtoul() return code, but I'll
> sneak your suggested change into this patch if nobody objects.
Fine with me.
Acked-by: Miroslav Benes <mbe...@suse.cz>
Miroslav
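The pattern agreed on above, as a standalone sketch (the parser is a simplified stand-in for `kstrtobool()`, and the handler name is illustrative): propagate the parser's own error code instead of hard-coding `-EINVAL`, so the handler stays correct even if the parser ever returns a different errno.

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>
#include <string.h>

/* Simplified stand-in for kstrtobool(): accepts "0"/"1"/"y"/"n". */
static int parse_bool(const char *buf, bool *res)
{
	if (!strcmp(buf, "1") || !strcmp(buf, "y")) { *res = true;  return 0; }
	if (!strcmp(buf, "0") || !strcmp(buf, "n")) { *res = false; return 0; }
	return -EINVAL;
}

/* Store-handler shape after the review comment. */
static long enabled_store(const char *buf)
{
	bool enabled;
	int ret;

	ret = parse_bool(buf, &enabled);
	if (ret)
		return ret;	/* was: return -EINVAL; */
	return enabled;
}
```

Returning `ret` also keeps the error path honest if `parse_bool()` grows new failure modes later; the call sites never need revisiting.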
On Thu, 8 Dec 2016, Josh Poimboeuf wrote:
> Move functions related to the actual patching of functions and objects
> into a new patch.c file.
>
> Signed-off-by: Josh Poimboeuf <jpoim...@redhat.com>
Acked-by: Miroslav Benes <mbe...@suse.cz>
Miroslav
ff-by: Josh Poimboeuf <jpoim...@redhat.com>
Acked-by: Miroslav Benes <mbe...@suse.cz>
Miroslav
not necessarily
> fully applied).
>
> - Patched means that an object's funcs are registered with ftrace and
> added to the klp_ops func stack.
>
> Also, since these states are binary, represent them with booleans
> instead of ints.
>
> Signed-off-by: Josh Poimboeuf <
> > > diff --git a/kernel/livepatch/patch.c b/kernel/livepatch/patch.c
> > > index 5efa262..e79ebb5 100644
> > > --- a/kernel/livepatch/patch.c
> > > +++ b/kernel/livepatch/patch.c
> > > @@ -29,6 +29,7 @@
> > > #include
> > > #include
> > > #include "patch.h"
> > > +#include "transition.h"
>
<jpoim...@redhat.com>
I believe there is no harm doing that and we need it for
_TIF_PATCH_PENDING later.
Reviewed-by: Miroslav Benes <mbe...@suse.cz>
Miroslav
>
> The bit is included in the _TIF_USER_WORK_MASK macro so that
> do_notify_resume() and klp_update_patch_state() get called when the bit
> is set.
>
> Signed-off-by: Josh Poimboeuf <jpoim...@redhat.com>
Looks good to me. You can add my
Reviewed-by: Miroslav Benes <mbe...@suse.cz>
Miroslav
>
> The bit is placed in the _TIF_ALLWORK_MASK macro, which results in
> exit_to_usermode_loop() calling klp_update_patch_state() when it's set.
>
> Signed-off-by: Josh Poimboeuf <jpoim...@redhat.com>
Reviewed-by: Miroslav Benes <mbe...@suse.cz>
Miroslav
On Mon, 19 Dec 2016, Josh Poimboeuf wrote:
> On Mon, Dec 19, 2016 at 05:25:19PM +0100, Miroslav Benes wrote:
> > On Thu, 8 Dec 2016, Josh Poimboeuf wrote:
> >
> > > diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> > > index 215612c..b4a6663 100644
> > >
ne the flags explicitly.
>
> Signed-off-by: Josh Poimboeuf <jpoim...@redhat.com>
With _TIF_SINGLESTEP and _TIF_NEED_RESCHED swapped you can add my
Reviewed-by: Miroslav Benes <mbe...@suse.cz>
Miroslav
> ---
> arch/x86/include/asm/thread_info.h | 9 -
> 1 fil
On Thu, 8 Dec 2016, Josh Poimboeuf wrote:
> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> index 215612c..b4a6663 100644
> --- a/arch/x86/Kconfig
> +++ b/arch/x86/Kconfig
> @@ -155,6 +155,7 @@ config X86
> select HAVE_PERF_REGS
> select HAVE_PERF_USER_STACK_DUMP
> select
On Thu, 28 Apr 2016, Josh Poimboeuf wrote:
> Change livepatch to use a basic per-task consistency model. This is the
> foundation which will eventually enable us to patch those ~10% of
> security patches which change function or data semantics. This is the
> biggest remaining piece needed to
On Wed, 4 May 2016, Josh Poimboeuf wrote:
> On Wed, May 04, 2016 at 04:12:05PM +0200, Petr Mladek wrote:
> > On Wed 2016-05-04 14:39:40, Petr Mladek wrote:
> > >*
> > >* Note that the task must never be migrated to the target
> > >* state when being inside this
[...]
> +static int klp_target_state;
[...]
> +void klp_init_transition(struct klp_patch *patch, int state)
> +{
> + struct task_struct *g, *task;
> + unsigned int cpu;
> + struct klp_object *obj;
> + struct klp_func *func;
> + int initial_state = !state;
> +
> +
On Wed, 4 May 2016, Josh Poimboeuf wrote:
> On Wed, May 04, 2016 at 10:42:23AM +0200, Petr Mladek wrote:
> > On Thu 2016-04-28 15:44:48, Josh Poimboeuf wrote:
> > > Change livepatch to use a basic per-task consistency model. This is the
> > > foundation which will eventually enable us to patch
On Tue, 3 May 2016, Petr Mladek wrote:
> On Thu 2016-04-28 15:44:41, Josh Poimboeuf wrote:
> > Add the TIF_PATCH_PENDING thread flag to enable the new livepatch
> > per-task consistency model for powerpc. The bit getting set indicates
> > the thread has a pending patch which needs to be applied
On Tue, 26 Apr 2016, Balbir Singh wrote:
> > + + Anything inlined into __schedule() can not be patched.
> > +
> > +The switch_to macro is inlined into __schedule(). It switches the
> > +context between two processes in the middle of the macro. It does
> > +not save RIP in x86_64
On Thu, 14 Apr 2016, Jessica Yu wrote:
> +++ Miroslav Benes [14/04/16 15:28 +0200]:
> > On Wed, 13 Apr 2016, Jessica Yu wrote:
>
> > > A second concern I have is that apply_relocate_add() relies on
> > > sections like .stubs and .toc (for 64-bit) and .init.plt and .
On Thu, 14 Apr 2016, Michael Ellerman wrote:
> On Thu, 2016-04-14 at 14:01 +0200, Miroslav Benes wrote:
> > On Wed, 13 Apr 2016, Michael Ellerman wrote:
>
> > > static void klp_disable_func(struct klp_func *func)
> > > {
> > > struct klp_ops *ops;
On Wed, 13 Apr 2016, Jessica Yu wrote:
> +++ Miroslav Benes [13/04/16 15:01 +0200]:
> > On Wed, 13 Apr 2016, Michael Ellerman wrote:
> >
> > > This series adds live patching support for powerpc (ppc64le only ATM).
> > >
> > > It's unchan
On Thu, 14 Apr 2016, Miroslav Benes wrote:
> On Wed, 13 Apr 2016, Michael Ellerman wrote:
>
> > Add the powerpc specific livepatch definitions. In particular we provide
> > a non-default implementation of klp_get_ftrace_location().
> >
> > This is required beca
On Wed, 13 Apr 2016, Michael Ellerman wrote:
> Add the powerpc specific livepatch definitions. In particular we provide
> a non-default implementation of klp_get_ftrace_location().
>
> This is required because the location of the mcount call is not constant
> when using -mprofile-kernel (which
On Wed, 13 Apr 2016, Michael Ellerman wrote:
> When livepatch tries to patch a function it takes the function address
> and asks ftrace to install the livepatch handler at that location.
> ftrace will look for an mcount call site at that exact address.
>
> On powerpc the mcount location is not
On Wed, 13 Apr 2016, Michael Ellerman wrote:
> This series adds live patching support for powerpc (ppc64le only ATM).
>
> It's unchanged since the version I posted on March 24, with the exception that
> I've dropped the first patch, which was a testing-only patch.
>
> If there's no further
> > potential developers of the framework itself.
>
> Thanks for starting the efforts; this is really needed if we want the
> infrastructure to be used also by someone else than its developers :)
Indeed. Great job, Petr.
> [ ... snip ... ]
> > +7. Limitations
> >
On Wed, 9 Mar 2016, Balbir Singh wrote:
>
> The previous revision was nacked by Torsten, but compared to the alternatives
> at hand I think we should test this approach. Ideally we want all the
> complexity
> of live-patching in the live-patching code and not in the patch. The other
> option
>
Hi,
On Fri, 4 Mar 2016, Michael Ellerman wrote:
> Hi Petr,
>
> On Thu, 2016-03-03 at 17:52 +0100, Petr Mladek wrote:
>
> > From: Balbir Singh
> >
> > Changelog v4:
> > 1. Renamed klp_matchaddr() to klp_get_ftrace_location()
> >and used it just to convert
On Wed, 10 Feb 2016, Torsten Duwe wrote:
> diff --git a/arch/powerpc/include/asm/livepatch.h
> b/arch/powerpc/include/asm/livepatch.h
> new file mode 100644
> index 000..44e8a2d
> --- /dev/null
> +++ b/arch/powerpc/include/asm/livepatch.h
> @@ -0,0 +1,45 @@
> +/*
> + * livepatch.h -
[ added Petr to CC list ]
On Mon, 25 Jan 2016, Torsten Duwe wrote:
> * create the appropriate files+functions
> arch/powerpc/include/asm/livepatch.h
> klp_check_compiler_support,
> klp_arch_set_pc
> arch/powerpc/kernel/livepatch.c with a stub for
>
[ Jessica added to CC list so she is aware that there are plans to
implement livepatch on ppc64le ]
On Tue, 26 Jan 2016, Torsten Duwe wrote:
> On Tue, Jan 26, 2016 at 11:50:25AM +0100, Miroslav Benes wrote:
> > > + */
> > > +int klp_write_module_reloc(struct module *mo