Control: reassign -1 src:linux 6.1.158-1
Control: tags -1 + moreinfo

On Thu, Dec 25, 2025 at 12:18:30PM +0100, Ivan wrote:
> Package: linux-image-amd64
> Version: 6.1.158-1
> Severity: normal
> X-Debbugs-Cc: [email protected]
> 
> Dear Maintainer,
> 
> What led up to the situation?
> The system was updated from a working kernel linux-image-6.1.0-37-amd64
> to linux-image-6.1.0-41-amd64 as part of a normal system upgrade.
> 
> What exactly did you do (or not do) that was effective (or ineffective)?
> After upgrading to linux-image-6.1.0-41-amd64 the system became unstable
> during early boot. Reinstalling the same kernel package did not fix the issue.
> Downgrading back to linux-image-6.1.0-37-amd64 restored normal operation.
> 
> What was the outcome of this action?
> With linux-image-6.1.0-41-amd64 the system fails to boot reliably and shows
> early boot/initramfs related issues. With linux-image-6.1.0-37-amd64 the
> system works correctly.
> 
> What outcome did you expect instead?
> The system should boot normally with linux-image-6.1.0-41-amd64, as it did
> with the previous kernel version.

We need more information on what exactly is failing. Can you boot
without the quiet kernel parameter (in case it is currently set) and
find out where the boot fails? What messages are printed on the
console? Ideally, record those completely via netconsole.
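
For reference, a minimal netconsole setup could look like the following sketch; the addresses, port, interface name, and MAC are placeholders you would have to adjust for your network:

```shell
# On a second machine (here 192.0.2.10), capture the UDP log stream, e.g.:
#
#     nc -u -l 6666        # option syntax varies between netcat variants
#
# On the failing machine, append a netconsole= parameter to the kernel
# command line (edit the entry in the GRUB menu for a one-off test, or
# set it in GRUB_CMDLINE_LINUX in /etc/default/grub and run update-grub):
#
#     netconsole=6666@192.0.2.20/eth0,6666@192.0.2.10/00:11:22:33:44:55
#
# Format: src-port@src-ip/src-interface,dst-port@dst-ip/dst-mac
```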

Another thing to do: since the failure seems to be reliable with the
broken version, can you please bisect the changes between 6.1.140-1
and 6.1.158-1?

Given that there were also a couple of released Debian versions between
6.1.140-1 and 6.1.158-1, please first isolate the range of Debian
revisions where the boot failure starts.

The released versions were:

6.1.158-1
6.1.153-1
6.1.148-1
6.1.147-1
6.1.140-1

so first bisect those versions to understand which caused the problem.
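
Purely as a sketch of the halving idea (not a tool you need to run), picking the first revision to test from that list looks like:

```shell
# The released Debian revisions, oldest (known good) to newest (known bad).
versions="6.1.140-1 6.1.147-1 6.1.148-1 6.1.153-1 6.1.158-1"
set -- $versions
# Install and test the middle entry first; each good/bad result then
# halves the remaining range, exactly as git bisect does for commits.
mid=$(( ($# + 1) / 2 ))
eval "echo First version to test: \${$mid}"   # prints: First version to test: 6.1.148-1
```
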
Let's hypothetically assume it still boots with 6.1.153-1 but fails
with 6.1.158-1; then it would be great if you could do the bisect
starting there. That involves compiling and testing a few kernels.

    git clone --single-branch -b linux-6.1.y https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
    cd linux-stable
    git checkout v6.1.153
    cp /boot/config-$(uname -r) .config
    yes '' | make localmodconfig
    make savedefconfig
    mv defconfig arch/x86/configs/my_defconfig

    # test 6.1.153 to ensure this is "good"
    make my_defconfig
    make -j $(nproc) bindeb-pkg
    ... install the resulting .deb package and confirm it successfully boots

    # test 6.1.158 to ensure this is "bad"
    git checkout v6.1.158
    make my_defconfig
    make -j $(nproc) bindeb-pkg
    ... install the resulting .deb package and confirm it fails to boot

With that confirmed, the bisection can start:

    git bisect start
    git bisect good v6.1.153
    git bisect bad v6.1.158

In each bisection step git checks out a commit between the newest
known-good and the oldest known-bad commit. Test that state using:

    make my_defconfig
    make -j $(nproc) bindeb-pkg
    ... install, try to boot / verify if problem exists

and if the problem is hit run:

    git bisect bad

and if the problem doesn't trigger run:

    git bisect good

Please take care to always select the just-built kernel when booting;
it won't always be the default entry picked by GRUB.

Iterate until git announces that it has identified the first bad commit.
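
As an aside, the good/bad loop can be seen end to end in a disposable toy repository; this is only a demonstration of the mechanics, not something you need to run:

```shell
# Build a throwaway repo with five commits; pretend commit 4 introduced the bug.
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email demo@example.com
git config user.name demo
for i in 1 2 3 4 5; do
    echo "$i" > file
    git add file
    git commit -q -m "commit $i"
done

# Bad = HEAD (commit 5), good = the root commit (commit 1).
git bisect start HEAD "$(git rev-list --max-parents=0 HEAD)"

# "Testing" here is just reading the file; on a real kernel each step
# means building, installing, and booting the checked-out tree.
while true; do
    if [ "$(cat file)" -ge 4 ]; then
        out=$(git bisect bad)
    else
        out=$(git bisect good)
    fi
    case "$out" in
        *"is the first bad commit"*) break ;;
    esac
done
first_bad=$(echo "$out" | sed -n 's/ is the first bad commit$//p' | head -n 1)
git log -1 --format=%s "$first_bad"   # prints: commit 4
```
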

Then provide the output of

    git bisect log

In the course of the bisection you might have to uninstall earlier test
kernels again so as not to exhaust the disk space in /boot. At the end,
please uninstall all self-built kernels again.

Please let us know the results.

One additional thing: your kernel is tainted by proprietary modules;
which ones are those? Please run the bisect experiments without those
modules loaded as well, to make it possible to produce an upstream
report.
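
To see which modules those are, something along these lines can help (a sketch; the bit values follow the kernel's Documentation/admin-guide/tainted-kernels.rst, and only the module-related flags are decoded):

```shell
# Decode a kernel taint value such as the one in /proc/sys/kernel/tainted.
# Only three common module-related bits are handled here; see
# Documentation/admin-guide/tainted-kernels.rst for the full list.
decode_taint() {
    v=$1
    out=""
    [ $((v & 1)) -ne 0 ] && out="${out}P"      # bit 0:  proprietary module loaded
    [ $((v & 4096)) -ne 0 ] && out="${out}O"   # bit 12: out-of-tree module loaded
    [ $((v & 8192)) -ne 0 ] && out="${out}E"   # bit 13: unsigned module loaded
    echo "${out:-untainted}"
}

decode_taint "$(cat /proc/sys/kernel/tainted 2>/dev/null || echo 0)"
decode_taint 4097   # prints: PO

# Modules that carry taint flags show them at the end of their line in
# /proc/modules, e.g. "(OE)" or "(PO)":
grep '(' /proc/modules 2>/dev/null || true
```
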

Did you have sufficient space in /boot?

Regards,
Salvatore
