On Mon, Mar 02, 2026 at 09:38:17AM +0100, Miroslav Benes wrote:
> Hi,
> 
> > > We store test modules in tools/testing/selftests/livepatch/test_modules/
> > > now. Could you move klp_test_module.c there, please? You might also reuse
> > > existing ones for the purpose perhaps.
> > 
> > IIUC, tools/testing/selftests/livepatch/test_modules/ is more like an out
> > of tree module. In the case of testing klp-build, we prefer to have it
> > work the same way as in-tree modules. This is important because klp-build
> > is a toolchain, and any changes to in-tree Makefiles may cause issues
> > with klp-build. The current version can catch these issues easily. If we
> > build the test module as an OOT module, we may miss some of them.
> > In the longer term, we should consider adding klp-build support for
> > building livepatches for OOT modules. But for now, good test coverage
> > for in-tree modules is more important.
> 
> Ok. I thought it would not matter but it is a fair point.
> 
> > > What about vmlinux? I understand that it provides a lot more flexibility
> > > to have separate functions for testing but would it be somehow sufficient
> > > to use the existing (real) kernel functions? Like cmdline_proc_show() and
> > > such which we use everywhere else? Or would it be too limited? I am fine if
> > > you find it necessary in the end. I just think that reusing as much as
> > > possible is generally a good approach.
> > 
> > I think using existing functions would be too limited, and Joe seems to
> > agree with this based on his experience. To be able to test corner cases
> > of the compiler/linker, such as LTO, we need special code patterns.
> > OTOH, if we want to use an existing kernel function for testing, it needs
> > to be relatively stable, i.e., not changed very often. It is not always
> > easy to find code known to be stable that also follows specific patterns.
> > If we add dedicated code as test targets, things will be much easier
> > down the road.
> 
> Fair enough.
> 

I've been tinkering with ideas in this space, though I took it in a very
different direction.

(First a disclaimer, this effort is largely the result of vibe coding
with Claude to prototype testing concepts, so I don't believe any of it
is reliable or upstream-worthy at this point.)

From a top-down perspective, I might start with the generated test
reports:

- https://file.rdu.redhat.com/~jolawren/artifacts/report.html
- https://file.rdu.redhat.com/~jolawren/artifacts/report.txt

and then in my own words:

1- I'm interested in testing several kernel configurations (distros,
debug, thinLTO) as well as toolchains (gcc, llvm) against the same
source tree and machine.  I call these config/toolchain pairs a testing
"profile".  In the report examples, these are combos like "fedora-43 +
virtme-ng" and "virtme-ng + thin-lto".
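The profile matrix could be driven by a simple loop; a minimal sketch
(profile names here are illustrative, not real targets in my branch):

```shell
#!/bin/sh
# Sketch: a testing "profile" is a kernel config paired with a
# toolchain; the same source tree is driven through each combination.
# (Profile names below are illustrative.)
PROFILES="fedora-43:gcc virtme-ng:gcc virtme-ng-thin-lto:llvm"

for p in $PROFILES; do
    config=${p%%:*}
    toolchain=${p##*:}
    echo "profile: config=$config toolchain=$toolchain"
    # ... configure + build the tree with $toolchain/$config,
    #     then run the klp-build test cases against it ...
done
```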

2- For test cases, a few possible results:

  PASS    - expected build / load / run success
            e.g. cmdline-string.patch
  FAIL*   - unexpected build / load / run failure
            e.g. some new bug in klp-build
  XFAIL   - expected build / load / run failure
            e.g. "no changes detected" patch
  XPASS*  - unexpected build / load / run success
            e.g. "no changes detected" patch actually created a .ko

* These would be considered interesting to look at.  Did we find a new
  bug, or maybe an existing bug is now fixed?
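In shell terms, the classification is just a mapping of (expected,
actual) outcomes onto the four results, something like:

```shell
# Sketch: classify <expected> <actual>, each "pass" or "fail", where
# "expected" comes from the test case definition and "actual" from the
# build / load / run attempt.
classify() {
    case "$1:$2" in
        pass:pass) echo PASS  ;;
        pass:fail) echo FAIL  ;;   # unexpected failure: new bug?
        fail:fail) echo XFAIL ;;   # failed as expected
        fail:pass) echo XPASS ;;   # unexpected success: bug fixed?
    esac
}
```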

3- Test cases and their makefile target workflows are split into build
and runtime parts.

4- Based on kpatch-build experience, test cases are further divided into
"quick" and "long" sets with the understanding that klp-build testing
takes a non-trivial amount of time.
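One way to do the quick/long split is simple tag-based selection; a
sketch (the per-test .conf layout is my assumption, not what the branch
actually does):

```shell
# Sketch: run only the tests tagged with the requested set, so CI can
# default to "quick" while nightly runs include "long".
run_set() {
    want=$1                       # "quick" or "long"
    for conf in tests/*.conf; do
        [ -e "$conf" ] || continue
        TEST_SET=quick            # default if the conf doesn't set one
        . "./$conf"               # sets TEST_NAME and maybe TEST_SET
        [ "$TEST_SET" = "$want" ] && echo "running $TEST_NAME"
    done
}
```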

5- Two patch targets:

a) current-tree - target the user's current source tree
b) patched-tree - (temporarily) patch the user's tree to *exactly* what
                  we need to target

Why?  In the kpatch-build project, patching the current tree meant we
had to rebase patches for every release.  We also had to hunt down
precise scenarios across the kernel tree to test, hoping they wouldn't
go away in future versions.  In other cases, the kernel or compiler
changed and we were no longer testing the original problem.

That said, patching a dummy patched-tree isn't perfect either,
particularly in the runtime sense.  You're not testing a release kernel,
but something slightly different.
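The patched-tree mechanism could be as simple as apply / run / revert;
a sketch (the helper name and layout are mine, not from the branch):

```shell
# Sketch: temporarily apply the test-target patch, run a command against
# the patched tree, then always restore the tree afterwards.
with_patched_tree() {
    src=$1 testpatch=$2
    shift 2
    patch -d "$src" -p1 < "$testpatch" || return 1
    "$@"                                    # e.g. run klp-build here
    rc=$?
    patch -R -d "$src" -p1 < "$testpatch"   # restore, pass or fail
    return $rc
}
```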

(Tangent: kpatch-build implemented a unit test scheme that cached object
files for even greater speed and fixed testing.  I haven't thought about
how a similar idea might work for klp-build.)

6- Two patch models:

a) static .patch files
b) scripted .patch generation

Why?  Sometimes a test like cmdline-string.patch is sufficient and
stable.  Other times it's not.  For example, the recount-many-file test
in this branch is implemented via a script.  This allows the test to be
dynamic and potentially avoid the rebasing problem mentioned above.
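A scripted generator just has to emit a unified diff derived from the
current tree; a sketch (the appended-marker transformation is purely
illustrative, a real generator would emit the code pattern under test):

```shell
# Sketch: generate a .patch from the tree as it is today, instead of
# shipping a static diff that needs rebasing every release.
gen_patch() {
    src=$1 file=$2
    work=$(mktemp -d)
    mkdir -p "$work/a" "$work/b"
    cp "$src/$file" "$work/a/" && cp "$src/$file" "$work/b/"
    # Illustrative transformation: append a marker the test can detect.
    echo '/* klp-build test marker */' >> "$work/b/$(basename "$file")"
    # Rewrite diff's header paths into the usual a/ b/ form.
    diff -u "$work/a/$(basename "$file")" "$work/b/$(basename "$file")" |
        sed -e "1s|.*|--- a/$file|" -e "2s|.*|+++ b/$file|"
    rm -rf "$work"
}
```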

7- Build verification, including ELF analysis.  It's not very mature in
this branch, but it would be nice to build on it:
  ======================================================================
  BUILD VERIFICATION
  ======================================================================
 
  klp-build exit code is 0
  Module exists: livepatch-cmdline-string.ko
  verify_diff_log_contains('changed function: cmdline_proc_show'): OK 
 
  ELF Analysis:
  klp_object[0]:
    .name = NULL (vmlinux)
  VERIFIED: klp_object.name = NULL (vmlinux)
      klp_func[0]:
        .old_name = "cmdline_proc_show"  [-> .rodata+0x15d]
        .new_func -> cmdline_proc_show
        .old_sympos = 0
      VERIFIED: klp_func.old_name = 'cmdline_proc_show'
      VERIFIED: klp_func.new_func -> cmdline_proc_show

Perhaps even extending this to the intermediate klp-tmp/ files?  This
would aid in greater sanity checking of what's produced, but also in
verifying that our test is still testing what it originally set out to.
(e.g., is the thinLTO suffix test still generating two common symbols
with a different hash suffix?)
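The generic shape of these checks is "captured tool output contains X";
a sketch in the spirit of the verify_diff_log_contains() check above
(the helper name is mine; real ELF checks would grep output captured
from readelf/objdump on the generated .ko and klp-tmp/ files):

```shell
# Sketch: assert that a captured log contains a fixed string, printing
# an OK/FAIL line for the report.
verify_log_contains() {
    needle=$1 log=$2
    if grep -qF "$needle" "$log"; then
        echo "verify_log_contains('$needle'): OK"
    else
        echo "verify_log_contains('$needle'): FAIL"
        return 1
    fi
}

# e.g.:
#   readelf --syms livepatch-cmdline-string.ko > syms.log
#   verify_log_contains 'cmdline_proc_show' syms.log
```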

8- Probably more I've already forgotten about :) Cross-compilation may
be interesting for build testing in the future.  For the full AI-created
commentary, there's
https://github.com/joe-lawrence/linux/blob/klp-build-selftests/README.md

> > I was using kpatch for testing. I can replace it with insmod.
> 

Do the helpers in functions.sh for safely loading and unloading
livepatches (that wait for the transition, etc.) aid here?
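For reference, this is roughly the shape of wait I had in mind (the real
helpers live in tools/testing/selftests/livepatch/functions.sh; this
standalone sketch takes the sysfs path as an extra parameter only so it
can be exercised outside a livepatched kernel):

```shell
# Sketch: wait until a livepatch transition completes, i.e. the module's
# sysfs "transition" attribute no longer reads 1.
wait_transition() {
    mod=$1
    attr=${2:-/sys/kernel/livepatch/$mod/transition}
    while [ "$(cat "$attr" 2>/dev/null)" = "1" ]; do
        sleep 1
    done
}

# e.g.: insmod "livepatch-$mod.ko" && wait_transition "$mod"
```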

> > > And a little bit of bikeshedding at the end. I think it would be more
> > > descriptive if the new config options and tests (test modules) have
> > > klp-build somewhere in the name to keep it clear. What do you think?
> > 
> > Technically, we can also use these tests to test other toolchains, for
> > example, kpatch-build. I don't know ksplice or kGraft enough to tell
> > whether they can benefit from these tests or not. OTOH, I am OK
> > changing the name/description of these config options.
> 
> I would prefer it, thank you. Unless someone else objects of course.
> 

To continue the bike shedding: in my branch, I dumped this all under a
new tools/testing/klp-build subdirectory, as my focus was to put
klp-build through its paces.  It does load the generated livepatches in
the runtime testing, but only as a sanity check.  With that, it didn't
touch CONFIG options or intermix with the livepatch/ test set.

If we do end up supplementing the livepatch/ with klp-build tests, then
I agree that naming them (either filename prefix or subdirectory) would
be nice.

But first, is it a goal for klp-build to be the build tool (rather than
simple module kbuild) for the livepatching .ko selftests?

--
Joe
