On Thu, Nov 29, 2018 at 10:44:30AM +0100, Petr Mladek wrote:
> The atomic replace and cumulative patches were introduced as a more secure
> way to handle dependent patches. They simplify the logic:
> 
>   + Any new cumulative patch is supposed to take over shadow variables
>     and changes made by callbacks from previous livepatches.
> 
>   + All replaced patches are discarded and the modules can be unloaded.
>     As a result, there is only one scenario when a cumulative livepatch
>     gets disabled.
> 
> The different handling of "normal" and cumulative patches might cause
> confusion. It would make sense to keep only one mode. On the other hand,
> it would be rude to enforce using the cumulative livepatches even for
> trivial and independent (hot) fixes.
> 
> This patch removes the stack of patches. The list of enabled patches
> is still needed but the ordering is not longer enforced.
                                      ^^^^^^^^^^
s/not longer/no longer

> 
> Note that it is not possible to catch all possible dependencies. It is
> the responsibility of the livepatch authors to decide.
> 
> Nevertheless this patch prevents having two patches for the same function
> enabled at the same time after the transition finishes. It might help
> to catch obvious mistakes. But more importantly, we do not need to
> handle situation when a patch in the middle of the function stack
  ^^^^^^^^^^^^^^^^
s/handle situation/handle a situation

> (ops->func_stack) is being removed.
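For readers following along: the reason removal from the middle of the stack
is the awkward case is that the ftrace handler only ever runs the entry at
the head of ops->func_stack.  From memory of kernel/livepatch/patch.c (a
sketch, not a verbatim quote):

	/* klp_ftrace_handler(), abridged: the head of ops->func_stack wins */
	func = list_first_or_null_rcu(&ops->func_stack, struct klp_func,
				      stack_node);
	if (WARN_ON_ONCE(!func))
		goto unlock;

With ordering no longer enforced, refusing two enabled patches for the same
function means we never have to unlink from the middle of that list.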
> 
> Signed-off-by: Petr Mladek <pmla...@suse.com>
> ---

Acked-by: Joe Lawrence <joe.lawre...@redhat.com>

>  Documentation/livepatch/cumulative-patches.txt | 11 ++----
>  Documentation/livepatch/livepatch.txt          | 30 ++++++++-------
>  kernel/livepatch/core.c                        | 51 ++++++++++++++++++++++++--
>  3 files changed, 68 insertions(+), 24 deletions(-)
> 
>
> [ ... snip ... ]
>
> diff --git a/Documentation/livepatch/livepatch.txt b/Documentation/livepatch/livepatch.txt
> index ba6e83a08209..3c150ab19b99 100644
> --- a/Documentation/livepatch/livepatch.txt
> +++ b/Documentation/livepatch/livepatch.txt
> @@ -143,9 +143,9 @@ without HAVE_RELIABLE_STACKTRACE are not considered fully supported by
>  the kernel livepatching.
>  
>  The /sys/kernel/livepatch/<patch>/transition file shows whether a patch
> -is in transition.  Only a single patch (the topmost patch on the stack)
> -can be in transition at a given time.  A patch can remain in transition
> -indefinitely, if any of the tasks are stuck in the initial patch state.
> +is in transition.  Only a single patch can be in transition at a given
> +time.  A patch can remain in transition indefinitely, if any of the tasks
> +are stuck in the initial patch state.
>  
>  A transition can be reversed and effectively canceled by writing the
>  opposite value to the /sys/kernel/livepatch/<patch>/enabled file while
> @@ -329,9 +329,10 @@ The livepatch gets enabled by calling klp_enable_patch() typically from
>  module_init() callback. The system will start using the new implementation
>  of the patched functions at this stage.
>  
> -First, the addresses of the patched functions are found according to their
> -names. The special relocations, mentioned in the section "New functions",
> -are applied. The relevant entries are created under
> +First, possible conflicts are checked for non-cummulative patches with
                                             ^^^^^^^^^^^^^^^
s/non-cummulative/non-cumulative

>
> [ ... snip ... ]
>
> diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
> index 0ce752e9e8bb..d8f6f49ac6b3 100644
> --- a/kernel/livepatch/core.c
> +++ b/kernel/livepatch/core.c
> @@ -122,6 +122,47 @@ static struct klp_object *klp_find_object(struct klp_patch *patch,
>       return NULL;
>  }
>  
> +static int klp_check_obj_conflict(struct klp_patch *patch,
> +                               struct klp_object *old_obj)
> +{
> +     struct klp_object *obj;
> +     struct klp_func *func, *old_func;
> +
> +     obj = klp_find_object(patch, old_obj);
> +     if (!obj)
> +             return 0;
> +
> +     klp_for_each_func(old_obj, old_func) {
> +             func = klp_find_func(obj, old_func);
> +             if (!func)
> +                     continue;
> +
> +             pr_err("Function '%s,%lu' in object '%s' has already been livepatched.\n",
> +                    func->old_name, func->old_sympos ? func->old_sympos : 1,
> +                    obj->name ? obj->name : "vmlinux");
> +             return -EINVAL;
> +     }
> +
> +     return 0;
> +}
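I assume the caller (trimmed from the quote) just walks the already-enabled
patches and their objects -- something along these lines, with the function
name being my guess rather than what v14 actually uses:

	/* hypothetical caller sketch */
	static int klp_check_patch_conflict(struct klp_patch *patch)
	{
		struct klp_patch *old_patch;
		struct klp_object *old_obj;
		int ret;

		list_for_each_entry(old_patch, &klp_patches, list) {
			klp_for_each_object(old_patch, old_obj) {
				ret = klp_check_obj_conflict(patch, old_obj);
				if (ret)
					return ret;
			}
		}

		return 0;
	}

That keeps the check at patch enable time only, which seems cheap enough for
the handful of livepatches a system realistically carries.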

Should we add a self-test check for this condition?  Let me know and I
can give it a go if you want to include it with v15.
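To make it concrete, the scenario I have in mind is two ordinary
(non-cumulative) test modules patching the same function, where loading the
second one should now fail with -EINVAL and log the new pr_err().  The
second module could be as small as the sketch below -- the target function,
module name and boilerplate are placeholders, the real thing would follow
whatever the existing test modules do:

	#include <linux/module.h>
	#include <linux/kernel.h>
	#include <linux/seq_file.h>
	#include <linux/livepatch.h>

	/* second patch for a function that another livepatch already owns */
	static int livepatch_cmdline_proc_show(struct seq_file *m, void *v)
	{
		seq_printf(m, "%s\n", "patched again (should never run)");
		return 0;
	}

	static struct klp_func funcs[] = {
		{
			.old_name = "cmdline_proc_show",
			.new_func = livepatch_cmdline_proc_show,
		}, { }
	};

	static struct klp_object objs[] = {
		{
			/* NULL name means vmlinux */
			.funcs = funcs,
		}, { }
	};

	static struct klp_patch patch = {
		.mod = THIS_MODULE,
		.objs = objs,
	};

	static int test_klp_conflict_init(void)
	{
		/* expected to return -EINVAL once another patch holds the function */
		return klp_enable_patch(&patch);
	}
	module_init(test_klp_conflict_init);

	MODULE_LICENSE("GPL");
	MODULE_INFO(livepatch, "Y");

The script side would then just verify that loading the second module fails
and that the expected message shows up in the kernel log.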

-- Joe
