On Wed, Feb 21, 2018 at 06:43:00PM +0100, Andrew Jones wrote:
> On Thu, Feb 15, 2018 at 10:03:05PM +0100, Christoffer Dall wrote:
> > So far this is mostly (see below) a copy of the legacy non-VHE switch
> > function, but we will start reworking these functions in separate
> > directions in later patches, so that the VHE and non-VHE paths are
> > each handled optimally.
> > 
> > The only difference after this patch between the VHE and non-VHE run
> > functions is that we omit the branch-predictor variant-2 hardening for
> > QC Falkor CPUs, because this workaround is specific to a series of
> > non-VHE ARMv8.0 CPUs.
> > 
> > Reviewed-by: Marc Zyngier <marc.zyng...@arm.com>
> > Signed-off-by: Christoffer Dall <christoffer.d...@linaro.org>
> > ---
> > 
> > Notes:
> >     Changes since v3:
> >      - Added BUG() to 32-bit ARM VHE run function
> >      - Omitted QC Falkor BP Hardening functionality from VHE-specific
> >        function
> >     
> >     Changes since v2:
> >      - Reworded commit message
> >     
> >     Changes since v1:
> >      - Rename kvm_vcpu_run to kvm_vcpu_run_vhe and rename __kvm_vcpu_run to
> >        __kvm_vcpu_run_nvhe
> >      - Removed stray whitespace line
> > 
> >  arch/arm/include/asm/kvm_asm.h   |  5 ++-
> >  arch/arm/kvm/hyp/switch.c        |  2 +-
> >  arch/arm64/include/asm/kvm_asm.h |  4 ++-
> >  arch/arm64/kvm/hyp/switch.c      | 66 +++++++++++++++++++++++++++++++++++++++-
> >  virt/kvm/arm/arm.c               |  5 ++-
> >  5 files changed, 77 insertions(+), 5 deletions(-)
> > 
> 
> ...
> 
> > diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> > index 2062d9357971..5bd879c78951 100644
> > --- a/virt/kvm/arm/arm.c
> > +++ b/virt/kvm/arm/arm.c
> > @@ -736,7 +736,10 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
> >             if (has_vhe())
> >                     kvm_arm_vhe_guest_enter();
> >  
> > -           ret = kvm_call_hyp(__kvm_vcpu_run, vcpu);
> > +           if (has_vhe())
> > +                   ret = kvm_vcpu_run_vhe(vcpu);
> > +           else
> > +                   ret = kvm_call_hyp(__kvm_vcpu_run_nvhe, vcpu);
> >  
> >             if (has_vhe())
> >                     kvm_arm_vhe_guest_exit();
> 
> We can combine these has_vhe() checks:
> 
>  if (has_vhe()) {
>          kvm_arm_vhe_guest_enter();
>          ret = kvm_vcpu_run_vhe(vcpu);
>          kvm_arm_vhe_guest_exit();
>  } else {
>          ret = kvm_call_hyp(__kvm_vcpu_run_nvhe, vcpu);
>  }

Maybe even do a cleanup patch that removes
kvm_arm_vhe_guest_enter/exit by putting the daif
masking/restoring directly into kvm_vcpu_run_vhe()?
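
Roughly what I have in mind (untested sketch; __kvm_vcpu_run_vhe() is a
made-up name standing in for the existing body of kvm_vcpu_run_vhe(),
and I'm assuming kvm_arm_vhe_guest_enter/exit are just thin wrappers
around the daif mask/restore helpers from <asm/daifflags.h>):

 /* arch/arm64/kvm/hyp/switch.c -- sketch only */
 int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 {
         int exit_code;

         local_daif_mask();                      /* was kvm_arm_vhe_guest_enter() */

         /* hypothetical helper holding the current world-switch body */
         exit_code = __kvm_vcpu_run_vhe(vcpu);

         local_daif_restore(DAIF_PROCCTX_NOIRQ); /* was kvm_arm_vhe_guest_exit() */

         return exit_code;
 }

That way the caller in virt/kvm/arm/arm.c wouldn't need the extra
enter/exit wrappers at all.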

Thanks,
drew