On Tue, 13 Aug 2019 10:26:29 +0200, Paolo Bonzini <[email protected]> wrote:
> On 09/08/19 17:59, Adalbert Lazăr wrote:
> > + prepare_to_swait_exclusive(&vcpu->wq, &wait,
> > + TASK_INTERRUPTIBLE);
> > +
> > + if (kvm_vcpu_check_block(vcpu) < 0)
> > + break;
> > +
> > + waited = true;
> > + schedule();
> > +
> > + if (kvm_check_request(KVM_REQ_INTROSPECTION, vcpu)) {
> > + do_kvmi_work = true;
> > + break;
> > + }
> > + }
> >
> > - waited = true;
> > - schedule();
> > + finish_swait(&vcpu->wq, &wait);
> > +
> > + if (do_kvmi_work)
> > + kvmi_handle_requests(vcpu);
> > + else
> > + break;
> > }
>
> Is this needed? Or can it just go back to KVM_RUN and handle
> KVM_REQ_INTROSPECTION there (in which case it would be basically
> premature optimization)?
>
It might still be needed, unless we can find a way to get back to this
function after handling the request. The original commit message for this
change was:
kvm: do not abort kvm_vcpu_block() in order to handle KVMI requests

  Leaving kvm_vcpu_block() in order to handle a request such as 'pause'
  would cause the vCPU to enter the guest when resumed. Most of the time
  this does not appear to be an issue, but during early boot a non-boot
  vCPU can start executing code from areas that first need to be set up
  by vCPU #0.

  In one particular case, vCPU #1 executed code that resided in an area
  not covered by a memslot. This caused an EPT violation that was turned
  in mmu_set_spte() into an MMIO request requiring emulation.
  Unfortunately, the emulator tripped, exited to userspace, and the VM
  was aborted.
_______________________________________________
Virtualization mailing list
[email protected]
https://lists.linuxfoundation.org/mailman/listinfo/virtualization