On 2/14/26 7:32 PM, Christian Zigotzky wrote:
Hello,

KVM PR and KVM HV do not work if the kernel was compiled with PREEMPT.

The entire FSL Cyrus+ board freezes when using KVM HV with PREEMPT.

The guest kernel doesn't boot if we use KVM PR with a PREEMPT kernel on the PA Semi Nemo board.

We were previously able to disable PREEMPT in the kernel configuration, but the latest git kernels now enable it by default and it is no longer possible to disable it.

I created a patch for disabling PREEMPT today. [1]

Is it possible to let us decide whether to enable PREEMPT or not?

Thanks in advance,

Christian

[1] https://raw.githubusercontent.com/chzigotzky/kernels/a74fa6179eaeafcea7ad89f0e61c30ace038daf2/patches/X1000/Kconfig.preempt.patch
[2] Bug report: https://github.com/chzigotzky/kernels/issues/19


Hi.

Do you have any trace showing where it is stuck? That would be useful.



My initial take is that cond_resched() is a no-op on a fully preemptible kernel, so we might be stuck there.
Eventually it should have come out of it, though.
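
For reference, a simplified sketch of why that happens (condensed from include/linux/sched.h; the exact #ifdef guards vary by kernel version):

/*
 * Condensed sketch, not the literal upstream code.  With
 * CONFIG_PREEMPTION set (and PREEMPT_DYNAMIC off), cond_resched()
 * compiles down to 0: voluntary scheduling points vanish because the
 * kernel expects to be preempted from the preempt_enable() and
 * interrupt-return paths instead.
 */
#if !defined(CONFIG_PREEMPTION) || defined(CONFIG_PREEMPT_DYNAMIC)
extern int __cond_resched(void);	/* may actually call the scheduler */
#define cond_resched()	__cond_resched()
#else
#define cond_resched()	(0)		/* no-op under full preemption */
#endif

So on such a configuration the need_resched()/cond_resched() pairs in the KVM paths never actually yield on their own.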

Could you please give the patch below a try and let me know?
Note: this likely still needs lazy-bit handling, so keep the kernel in preempt=full mode.
(Not tested)
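
For context on the lazy-bit note: need_resched() tests only TIF_NEED_RESCHED, while PREEMPT_LAZY requests rescheduling through a separate lazy flag, so the checks the patch touches would not see lazy requests. A rough sketch of the combined test a lazy-aware version would need (resched_requested_any() is a hypothetical name, not an existing kernel helper):

/*
 * Hypothetical helper, not in the kernel tree -- sketches the extra
 * flag a lazy-preemption-aware check would have to look at.  Assumes
 * the TIF_NEED_RESCHED_LAZY flag that comes with CONFIG_PREEMPT_LAZY.
 */
static inline bool resched_requested_any(void)
{
	if (test_thread_flag(TIF_NEED_RESCHED))		/* what need_resched() sees */
		return true;
#ifdef CONFIG_PREEMPT_LAZY
	if (test_thread_flag(TIF_NEED_RESCHED_LAZY))	/* invisible to need_resched() */
		return true;
#endif
	return false;
}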


diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 7667563fb9ff..fe215d1177fe 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -4901,7 +4901,7 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit,
 		}
 	}
 
 	if (need_resched())
-		cond_resched();
+		schedule();
 
 	kvmppc_update_vpas(vcpu);
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 9a89a6d98f97..54963c1d8b58 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -86,7 +86,7 @@ int kvmppc_prepare_to_enter(struct kvm_vcpu *vcpu)
 	while (true) {
 		if (need_resched()) {
 			local_irq_enable();
-			cond_resched();
+			schedule();
 			hard_irq_disable();
 			continue;
 		}
