On Fri, 13 Mar 2026 07:19:26 -0600, Jim Cromie <[email protected]> wrote:
> [...]
> smaller alignments, *and* scripts/sorttable.c does not tolerate the
> added ALIGN(8) padding.
>
> Reported-by: kernel test robot <[email protected]>
> Closes: https://lore.kernel.org/oe-lkp/[email protected]
> Signed-off-by: Jim Cromie <[email protected]>
>
>
> diff --git a/include/asm-generic/vmlinux.lds.h
> b/include/asm-generic/vmlinux.lds.h
> index eeb070f330bd..a2ba7e3d9994 100644
> --- a/include/asm-generic/vmlinux.lds.h
> +++ b/include/asm-generic/vmlinux.lds.h
> @@ -212,11 +212,13 @@
> [ ... skip 7 lines ... ]
>
> #define BOUNDED_SECTION_POST_LABEL(_sec_, _label_, _BEGIN_, _END_) \
> + . = ALIGN(8); \
> _label_##_BEGIN_ = .; \
> KEEP(*(_sec_)) \
> _label_##_END_ = .;
This affects a lot of existing BOUNDED_SECTION_BY users. I agree that it is
not a big issue (most of them already have ALIGN(8) or ALIGN(32), but
some have ALIGN(4) or no alignment at all). I think this can increase
the size of the kernel in other places.
What do you think about a new macro, something like
BOUNDED_SECTION_BY_ALIGNED(sec, label, align), with an explicit alignment?
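Roughly, I mean something like this (a sketch only, the names
BOUNDED_SECTION_POST_LABEL_ALIGNED and BOUNDED_SECTION_BY_ALIGNED are my
invention, not in mainline); callers that need the 8-byte padding pass 8,
everyone else keeps their current alignment:

```c
/* Hypothetical variant: same body as BOUNDED_SECTION_POST_LABEL,
 * but the caller chooses the alignment explicitly, so existing
 * users are not forced to grow to an 8-byte boundary. */
#define BOUNDED_SECTION_POST_LABEL_ALIGNED(_sec_, _label_, _BEGIN_, _END_, _align_) \
	. = ALIGN(_align_);						\
	_label_##_BEGIN_ = .;						\
	KEEP(*(_sec_))							\
	_label_##_END_ = .;

#define BOUNDED_SECTION_BY_ALIGNED(_sec_, _label_, _align_)		\
	BOUNDED_SECTION_POST_LABEL_ALIGNED(_sec_, _label_, __start, __stop, _align_)
```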
> @@ -867,15 +869,21 @@
> [ ... skip 15 lines ... ]
> . = ALIGN(2); \
> .orc_unwind : AT(ADDR(.orc_unwind) - LOAD_OFFSET) { \
> - BOUNDED_SECTION_BY(.orc_unwind, _orc_unwind) \
> + __start_orc_unwind = .; \
> + KEEP(*(.orc_unwind)) \
> + __stop_orc_unwind = .; \
You already noticed an issue here, for example, and had to manually
expand the macro to "disable" the alignment. This is error-prone; I think it
is better to keep BOUNDED_SECTION_BY here.
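With a hypothetical explicit-alignment variant (BOUNDED_SECTION_BY_ALIGNED
is my suggestion above, not an existing macro), this hunk would not need the
manual expansion at all, something like:

```c
	. = ALIGN(2);							\
	.orc_unwind : AT(ADDR(.orc_unwind) - LOAD_OFFSET) {		\
		/* caller-chosen alignment: no forced ALIGN(8) here */	\
		BOUNDED_SECTION_BY_ALIGNED(.orc_unwind, _orc_unwind, 2)	\
	}
```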
Note: I don't understand linker scripts and all their implications well, so my
comments may be wrong.
--
Louis Chauvet <[email protected]>