Re: [PATCHv2] kvm: arm64: Add SVE support for nVHE.

2021-02-08 Thread Dave Martin
On Fri, Feb 05, 2021 at 12:12:51AM +, Daniel Kiss wrote:
> 
> 
> > On 4 Feb 2021, at 18:36, Dave Martin  wrote:
> > 
> > On Tue, Feb 02, 2021 at 07:52:54PM +0100, Daniel Kiss wrote:
> >> CPUs that support SVE are architecturally required to support the
> >> Virtualization Host Extensions (VHE), so until now the kernel has
> >> supported SVE alongside KVM only with VHE enabled. In some cases it is
> >> desirable to run an nVHE config even when VHE is available.
> >> This patch adds support for SVE in the nVHE configuration too.
> >> 
> >> Tested on FVP with a Linux guest VM that runs with a different VL than
> >> the host system.
> >> 
> >> Signed-off-by: Daniel Kiss 

[...]

> >> diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
> >> index 3e081d556e81..8f29b468e989 100644
> >> --- a/arch/arm64/kvm/fpsimd.c
> >> +++ b/arch/arm64/kvm/fpsimd.c
> >> @@ -42,6 +42,16 @@ int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu)
> >>if (ret)
> >>goto error;
> >> 
> >> +  if (!has_vhe() && vcpu->arch.sve_state) {
> >> +  void *sve_state_end = vcpu->arch.sve_state +
> >> +  SVE_SIG_REGS_SIZE(sve_vq_from_vl(vcpu->arch.sve_max_vl));
> >> +  ret = create_hyp_mappings(vcpu->arch.sve_state,
> >> +sve_state_end,
> >> +PAGE_HYP);
> >> +  if (ret)
> >> +  goto error;
> >> +  }
> >>vcpu->arch.host_thread_info = kern_hyp_va(ti);
> >>vcpu->arch.host_fpsimd_state = kern_hyp_va(fpsimd);
> >> error:
> >> @@ -109,10 +119,22 @@ void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu)
> >>local_irq_save(flags);
> >> 
> >>if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED) {
> >> +  if (guest_has_sve) {
> >> +  if (has_vhe())
> >> +  __vcpu_sys_reg(vcpu, ZCR_EL1) = read_sysreg_s(SYS_ZCR_EL12);
> >> +  else {
> >> +  __vcpu_sys_reg(vcpu, ZCR_EL1) = read_sysreg_s(SYS_ZCR_EL1);
> >> +  /*
> >> +   * The vcpu could set ZCR_EL1 to a shorter VL than the max VL,
> >> +   * but the context is still valid there, so save the whole
> >> +   * context. In the nVHE case we need to reset ZCR_EL1 for that,
> >> +   * because the save will be done in EL1.
> >> +   */
> >> +  write_sysreg_s(sve_vq_from_vl(vcpu->arch.sve_max_vl) - 1,
> >> + SYS_ZCR_EL1);
> > 
> > This still doesn't feel right.  We're already in EL1 here I think, in
> > which case ZCR_EL1 has an immediate effect on what state the
> > architecture guarantees to preserve: if we need to change ZCR_EL1, it's
> > because it might be wrong.  If it's wrong, it might be too small.  And
> > if it's too small, we may have already lost some SVE register bits that
> > the guest cares about.
> "On taking an exception from an Exception level that is more constrained
>  to a target Exception level that is less constrained, or on writing a larger 
> value
>  to ZCR_ELx.LEN, then the previously inaccessible bits of these registers 
> that 
>  become accessible have a value of either zero or the value they had before
>  executing at the more constrained size.”
> If the CPU zeros the register when ZCR is written or exception is taken my 
> reading
>  of the above is that the register content maybe lost when we land in EL2.
> No code shall not count on the register content after reduces the VL in ZCR.
> 
> I see my comment is also not clear enough.
> Maybe we shouldn't save the guest's sve_max_vl here; it would be enough
> to save up to the actual VL.

Maybe you're right, but I may be missing some information here.

Can you sketch out more explicitly how it works, showing how all the
bits the host cares about (and only those bits) are saved/restored for
the host, and how all the bits the guest cares about (and only those
bits) are saved/restored for the guest?
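For concreteness, the amount of SVE state per vcpu scales with the maximum VQ. A rough standalone model of the arithmetic (a sketch of the layout assumed by SVE_SIG_REGS_SIZE(), not the kernel's uapi macros):

```c
#include <assert.h>

/* SVE vector lengths: VL is in bytes, VQ counts 16-byte quadwords,
 * and ZCR_ELx.LEN holds VQ - 1.  The saved register block covers
 * 32 Z registers (VQ*16 bytes each), 16 P registers (VQ*2 bytes each)
 * and FFR (VQ*2 bytes).  Standalone illustration only.
 */
#define SVE_VQ_BYTES 16U

static unsigned int vq_from_vl(unsigned int vl_bytes)
{
	return vl_bytes / SVE_VQ_BYTES;
}

static unsigned int sve_regs_size(unsigned int vq)
{
	return 32 * vq * 16	/* Z0-Z31 */
	     + 16 * vq * 2	/* P0-P15 */
	     + vq * 2;		/* FFR    */
}
```

So the region mapped into hyp grows linearly with the guest's maximum vector length, a few hundred bytes per quadword of VL.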


Various optimisations are possible, but there is a risk of breaking
assumptions elsewhere.  For example, the KVM_{SET,GET}_ONE_REG code
makes assumptions about the layout of the data in
vcpu->

Re: [PATCHv2] kvm: arm64: Add SVE support for nVHE.

2021-02-04 Thread Dave Martin
On Tue, Feb 02, 2021 at 07:52:54PM +0100, Daniel Kiss wrote:
> CPUs that support SVE are architecturally required to support the
> Virtualization Host Extensions (VHE), so until now the kernel has
> supported SVE alongside KVM only with VHE enabled. In some cases it is
> desirable to run an nVHE config even when VHE is available.
> This patch adds support for SVE in the nVHE configuration too.
> 
> Tested on FVP with a Linux guest VM that runs with a different VL than
> the host system.
> 
> Signed-off-by: Daniel Kiss 
> ---
>  arch/arm64/Kconfig  |  7 -
>  arch/arm64/include/asm/fpsimd.h |  6 
>  arch/arm64/include/asm/fpsimdmacros.h   | 24 +--
>  arch/arm64/include/asm/kvm_host.h   | 17 +++
>  arch/arm64/kernel/entry-fpsimd.S|  5 
>  arch/arm64/kvm/arm.c|  5 
>  arch/arm64/kvm/fpsimd.c | 39 -
>  arch/arm64/kvm/hyp/fpsimd.S | 15 ++
>  arch/arm64/kvm/hyp/include/hyp/switch.h | 34 +++--
>  arch/arm64/kvm/hyp/nvhe/switch.c| 29 +-
>  arch/arm64/kvm/reset.c  |  6 +---
>  11 files changed, 126 insertions(+), 61 deletions(-)
> 
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index f39568b28ec1..049428f1bf27 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -1676,7 +1676,6 @@ endmenu
>  config ARM64_SVE
>   bool "ARM Scalable Vector Extension support"
>   default y
> - depends on !KVM || ARM64_VHE
>   help
> The Scalable Vector Extension (SVE) is an extension to the AArch64
> execution state which complements and extends the SIMD functionality
> @@ -1705,12 +1704,6 @@ config ARM64_SVE
> booting the kernel.  If unsure and you are not observing these
> symptoms, you should assume that it is safe to say Y.
>  
> -   CPUs that support SVE are architecturally required to support the
> -   Virtualization Host Extensions (VHE), so the kernel makes no
> -   provision for supporting SVE alongside KVM without VHE enabled.
> -   Thus, you will need to enable CONFIG_ARM64_VHE if you want to support
> -   KVM in the same kernel image.
> -
>  config ARM64_MODULE_PLTS
>   bool "Use PLTs to allow module memory to spill over into vmalloc area"
>   depends on MODULES
> diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h
> index bec5f14b622a..526d69f3eeb3 100644
> --- a/arch/arm64/include/asm/fpsimd.h
> +++ b/arch/arm64/include/asm/fpsimd.h
> @@ -69,6 +69,12 @@ static inline void *sve_pffr(struct thread_struct *thread)
>  extern void sve_save_state(void *state, u32 *pfpsr);
>  extern void sve_load_state(void const *state, u32 const *pfpsr,
>  unsigned long vq_minus_1);
> +/*
> + * sve_load_state_nvhe() is for the hyp code, where the SVE registers are
> + * handled from EL2 and the vector length is governed by ZCR_EL2.
> + */
> +extern void sve_load_state_nvhe(void const *state, u32 const *pfpsr,
> +unsigned long vq_minus_1);
>  extern void sve_flush_live(void);
>  extern void sve_load_from_fpsimd_state(struct user_fpsimd_state const *state,
>  unsigned long vq_minus_1);
> diff --git a/arch/arm64/include/asm/fpsimdmacros.h b/arch/arm64/include/asm/fpsimdmacros.h
> index af43367534c7..d309c6071bce 100644
> --- a/arch/arm64/include/asm/fpsimdmacros.h
> +++ b/arch/arm64/include/asm/fpsimdmacros.h
> @@ -205,6 +205,17 @@
>  921:
>  .endm
>  
> +/* Update ZCR_EL2.LEN with the new VQ */
> +.macro sve_load_vq_nvhe xvqminus1, xtmp, xtmp2
> + mrs_s   \xtmp, SYS_ZCR_EL2
> + bic \xtmp2, \xtmp, ZCR_ELx_LEN_MASK
> + orr \xtmp2, \xtmp2, \xvqminus1
> + cmp \xtmp2, \xtmp
> + b.eq922f
> + msr_s   SYS_ZCR_EL2, \xtmp2 //self-synchronising
> +922:
> +.endm
> +

This looks a little better, but can we just give sve_load_vq an extra
argument, say

.macro sve_load_vq ... , el=EL1

[...]

> +.macro sve_load_nvhe nxbase, xpfpsr, xvqminus1, nxtmp, xtmp2
> + sve_load_vq_nvhe\xvqminus1, x\nxtmp, \xtmp2

and do sve_load_vq \xvqminus1, x\nxtmp, \xtmp2, EL2

> + _sve_load\nxbase, \xpfpsr, \nxtmp
> +.endm
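In C terms, the read-modify-write that sve_load_vq_nvhe performs on ZCR_EL2 looks roughly like this (a standalone model; the 0x1ff mask mirrors the kernel's ZCR_ELx_LEN_MASK and is an assumption here):

```c
#include <assert.h>
#include <stdint.h>

/* Model of the macro's sequence: clear the LEN field of the current
 * ZCR_EL2 value, insert the new VQ-1, and skip the (self-synchronising)
 * register write when nothing would change.  Illustration only.
 */
#define ZCR_ELX_LEN_MASK 0x1ffUL	/* assumed; see ZCR_ELx_LEN_MASK */

static uint64_t zcr_update_len(uint64_t zcr, uint64_t vq_minus_1, int *wrote)
{
	uint64_t updated = (zcr & ~ZCR_ELX_LEN_MASK) | vq_minus_1; /* bic + orr */

	*wrote = (updated != zcr);	/* cmp + b.eq skips the msr */
	return updated;
}
```

The early-out on an unchanged value is what the `cmp`/`b.eq 922f` pair buys in the macro.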
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 8fcfab0c2567..11a058c81c1d 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -376,6 +376,10 @@ struct kvm_vcpu_arch {
>  #define vcpu_sve_pffr(vcpu) ((void *)((char *)((vcpu)->arch.sve_state) + \
> sve_ffr_offset((vcpu)->arch.sve_max_vl)))
>  
> +#define vcpu_sve_pffr_hyp(vcpu) ((void *)((char *) \
> + (kern_hyp_va((vcpu)->arch.sve_state)) + \
> + 

Re: [PATCH] kvm: arm64: Add SVE support for nVHE.

2021-01-26 Thread Dave Martin
On Fri, Jan 22, 2021 at 06:21:21PM +, Marc Zyngier wrote:
> Daniel,
> 
> Please consider cc'ing the maintainer (me) as well as the KVM/arm64
> reviewers (Julien, James, Suzuki) and the kvmarm list (all now Cc'd).
> 
> On 2021-01-22 01:07, Daniel Kiss wrote:
> >CPUs that support SVE are architecturally required to support the
> >Virtualization Host Extensions (VHE), so until now the kernel has
> >supported SVE alongside KVM only with VHE enabled. In some cases it is
> >desirable to run an nVHE config even when VHE is available.
> >This patch adds support for SVE in the nVHE configuration too.
> >
> >In the case of nVHE the system registers behave a bit differently.
> >ZCR_EL2 effectively defines the maximum vector length that can be set
> >in ZCR_EL1. To limit the vector length for the guest, ZCR_EL2 needs to
> >be set accordingly, therefore it becomes part of the context.
> 
> Not really. It's just part of the *hypervisor* state for this guest,
> and not part of the guest state. Not different from HCR_EL2, for example.

Also, ZCR_EL2 doesn't affect what can be written in ZCR_EL1, so this
might be reworded to say that it just limits the effective vector length
available to the guest.
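For illustration, the constraint can be modelled as a simple min() over the two LEN fields (a sketch that ignores quantisation to implemented vector lengths):

```c
#include <assert.h>

/* Under nVHE, ZCR_EL1 can still be written with any LEN value; the
 * architecture just constrains the *effective* VL at EL0/EL1 to the
 * minimum of what ZCR_EL1 asks for and what ZCR_EL2 allows.
 * Illustration only -- real hardware additionally rounds down to an
 * implemented vector length.
 */
static unsigned int effective_vq(unsigned int zcr_el1_len,
				 unsigned int zcr_el2_len)
{
	unsigned int len = zcr_el1_len < zcr_el2_len ? zcr_el1_len
						     : zcr_el2_len;
	return len + 1;		/* VQ = LEN + 1 */
}
```

So a large ZCR_EL1.LEN write is not rejected; it is simply clamped by ZCR_EL2 when the guest runs.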

> >The sve_state will be loaded in EL2 so it needs to be mapped, and
> >during the load ZCR_EL2 will control the vector length.
> >Trap control is similar to the VHE case except the bit values are the
> >opposite: ZCR_EL1 access trapping with VHE uses a ZEN value of 0, but
> >in the nVHE case TZ needs to be set to 1 to trigger the exception.
> >Trap control needs to be respected during the context switch even in EL2.
> 
> Isn't that exactly the same as FPSIMD accesses?

(Yes, this isn't really new.  It might be best to let the code speak for
itself on that point, rather than trying to explain it in the commit
message.)

> 
> >
> >Tested on FVP with a Linux guest VM that runs with a different VL than
> >the host system.
> >
> >This patch requires sve_set_vq from
> > - arm64/sve: Rework SVE trap access to minimise memory access
> 
> Care to add a pointer to this patch? This also shouldn't be part
> of the commit message. When you repost it, please include the
> other patch as a series unless it has already been merged by then.
> 
> >
> >Signed-off-by: Daniel Kiss 
> >---
> > arch/arm64/Kconfig |  7 
> > arch/arm64/include/asm/fpsimd.h|  4 ++
> > arch/arm64/include/asm/fpsimdmacros.h  | 38 +
> > arch/arm64/include/asm/kvm_host.h  | 19 +++--
> > arch/arm64/kernel/fpsimd.c | 11 +
> > arch/arm64/kvm/arm.c   |  5 ---
> > arch/arm64/kvm/fpsimd.c| 22 +++---
> > arch/arm64/kvm/hyp/fpsimd.S| 15 +++
> > arch/arm64/kvm/hyp/include/hyp/switch.h| 48 --
> > arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h | 26 
> > arch/arm64/kvm/hyp/nvhe/switch.c   |  6 ++-
> > arch/arm64/kvm/reset.c |  8 ++--
> > 12 files changed, 153 insertions(+), 56 deletions(-)
> >
> >diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> >index a6b5b7ef40ae..f17ab095e99f 100644
> >--- a/arch/arm64/Kconfig
> >+++ b/arch/arm64/Kconfig
> >@@ -1692,7 +1692,6 @@ endmenu
> > config ARM64_SVE
> > bool "ARM Scalable Vector Extension support"
> > default y
> >-depends on !KVM || ARM64_VHE
> > help
> >   The Scalable Vector Extension (SVE) is an extension to the AArch64
> >   execution state which complements and extends the SIMD functionality
> >@@ -1721,12 +1720,6 @@ config ARM64_SVE
> >   booting the kernel.  If unsure and you are not observing these
> >   symptoms, you should assume that it is safe to say Y.
> >
> >-  CPUs that support SVE are architecturally required to support the
> >-  Virtualization Host Extensions (VHE), so the kernel makes no
> >-  provision for supporting SVE alongside KVM without VHE enabled.
> >-  Thus, you will need to enable CONFIG_ARM64_VHE if you want to support
> >-  KVM in the same kernel image.
> >-
> > config ARM64_MODULE_PLTS
> > bool "Use PLTs to allow module memory to spill over into vmalloc area"
> > depends on MODULES
> >diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h
> >index e60aa4ebb351..e7889f4c5cef 100644
> >--- a/arch/arm64/include/asm/fpsimd.h
> >+++ b/arch/arm64/include/asm/fpsimd.h
> >@@ -69,6 +69,10 @@ static inline void *sve_pffr(struct thread_struct *thread)
> > extern void sve_save_state(void *state, u32 *pfpsr);
> > extern void sve_load_state(void const *state, u32 const *pfpsr,
> >unsigned long vq_minus_1);
> >+extern void sve_save_state_nvhe(void *state, u32 *pfpsr,
> >+   unsigned long vq_minus_1);
> >+extern void sve_load_state_nvhe(void const *state, u32 const *pfpsr,
> >+   unsigned long vq_minus_1);
> 
> Why do we need two different entry points?
> 
> > extern void 

Re: [PATCH v3 0/4] KVM: arm64: Fix get-reg-list regression

2020-11-10 Thread Dave Martin
On Thu, Nov 05, 2020 at 10:10:18AM +0100, Andrew Jones wrote:
> 张东旭  reported a regression seen with CentOS
> when migrating from an old kernel to a new one. The problem was
> that QEMU rejected the migration since KVM_GET_REG_LIST reported
> a register was missing on the destination. Extra registers are OK
> on the destination, but not missing ones. The regression reproduces
> with upstream kernels when migrating from a 4.15 or later kernel,
> up to one with commit 73433762fcae ("KVM: arm64/sve: System register
> context switch and access support"), to a kernel that includes that
> commit, e.g. the latest mainline (5.10-rc2).
> 
> The first patch of this series is the fix. The next two patches,
> which don't have any intended functional changes, allow ID_SANITISED
> to be used for registers that flip between exposing features and
> being RAZ, which allows some code to be removed.
> 
> v3:
>  - Improve commit messages [Dave]
>  - Add new patch to consolidate REG_HIDDEN* flags [Dave]
> 
> v2:
>  - CC stable [Marc]
>  - Only one RAZ flag is enough [Marc]
>  - Move id_visibility() up by read_id_reg() since they'll likely
>be maintained together [drew]
> 
> 
> Andrew Jones (4):
>   KVM: arm64: Don't hide ID registers from userspace
>   KVM: arm64: Consolidate REG_HIDDEN_GUEST/USER
>   KVM: arm64: Check RAZ visibility in ID register accessors
>   KVM: arm64: Remove AA64ZFR0_EL1 accessors
> 
>  arch/arm64/kvm/sys_regs.c | 108 --
>  arch/arm64/kvm/sys_regs.h |  16 +++---
>  2 files changed, 41 insertions(+), 83 deletions(-)

Thanks for the updates.

Looks like I missed the opportunity to review this, but just for the
record (even if it doesn't appear in the tree):

Reviewed-by: Dave Martin 

Cheers
---Dave
___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


Re: [PATCH v2 0/3] KVM: arm64: Fix get-reg-list regression

2020-11-04 Thread Dave Martin
On Tue, Nov 03, 2020 at 02:52:44PM +0100, Andrew Jones wrote:
> On Tue, Nov 03, 2020 at 11:37:27AM +0000, Dave Martin wrote:
> > On Mon, Nov 02, 2020 at 07:50:34PM +0100, Andrew Jones wrote:
> > > 张东旭  reported a regression seen with CentOS
> > > when migrating from an old kernel to a new one. The problem was
> > > that QEMU rejected the migration since KVM_GET_REG_LIST reported
> > > a register was missing on the destination. Extra registers are OK
> > > on the destination, but not missing ones. The regression reproduces
> > > with upstream kernels when migrating from a 4.15 or later kernel,
> > > up to one with commit 73433762fcae ("KVM: arm64/sve: System register
> > > context switch and access support"), to a kernel that includes that
> > > commit, e.g. the latest mainline (5.10-rc2).
> > > 
> > > The first patch of this series is the fix. The next two patches,
> > > which don't have any intended functional changes, allow ID_SANITISED
> > > to be used for registers that flip between exposing features and
> > > being RAZ, which allows some code to be removed.
> > 
> > Is it worth updating Documentation/virt/kvm/api.rst to clarify the
> > expected use during VM migrations, and the guarantees that are expected
> > to hold between migratable kernel versions?  Currently the specification
> > is a mixture of "surely it's obvious" and "whatever makes QEMU work".
> > 
> > I guess that caught me out, but I'll let others judge whether other
> > people are likely to get similarly confused.
> >
> 
> I'm not sure what section this would fit in in api.rst. It feels like
> this should be a higher level document that covers the migration
> guarantees of the API in general. Of course, with host cpu passthrough,
> nothing is really guaranteed. The upgrade path is reasonable and probably
> doable though.

I agree that QEMU is the documentation in practice :P

This may be a situation where strategic vagueness is the best policy,
since in practice people attempting migration will always rely on more
than we can strictly guarantee in the generic API.  The generic rule is
probably "knock yourself out, YMMV".

If there's no clear place to write something up, then I guess we are at
least not making things worse.

Cheers
---Dave


Re: [PATCH v2 3/3] KVM: arm64: Remove AA64ZFR0_EL1 accessors

2020-11-04 Thread Dave Martin
On Tue, Nov 03, 2020 at 02:46:40PM +0100, Andrew Jones wrote:
> On Tue, Nov 03, 2020 at 11:32:08AM +0000, Dave Martin wrote:
> > On Mon, Nov 02, 2020 at 07:50:37PM +0100, Andrew Jones wrote:
> > > The AA64ZFR0_EL1 accessors are just the general accessors with
> > > their visibility function open-coded. They also skip the if-else
> > > chain in read_id_reg, but there's no reason not to go there.
> > > Indeed, consolidating ID register accessors and removing lines
> > > of code makes it worthwhile.
> > > 
> > > No functional change intended.
> > 
> > Nit: No statement of what the patch does.
> 
> I can duplicate the summary in the commit message?

Generally, yes, though there is the opportunity to restore the missing
words and make a proper sentence out of it.  See my response to patch 2.

> > 
> > > Signed-off-by: Andrew Jones 
> > > ---
> > >  arch/arm64/kvm/sys_regs.c | 61 +++
> > >  1 file changed, 11 insertions(+), 50 deletions(-)
> > > 
> > > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > > index b8822a20b1ea..e2d6fb83280e 100644
> > > --- a/arch/arm64/kvm/sys_regs.c
> > > +++ b/arch/arm64/kvm/sys_regs.c
> > > @@ -1156,6 +1156,16 @@ static u64 read_id_reg(const struct kvm_vcpu *vcpu,
> > >  static unsigned int id_visibility(const struct kvm_vcpu *vcpu,
> > > const struct sys_reg_desc *r)
> > >  {
> > > + u32 id = sys_reg((u32)r->Op0, (u32)r->Op1,
> > > +  (u32)r->CRn, (u32)r->CRm, (u32)r->Op2);
> > > +
> > > + switch (id) {
> > > + case SYS_ID_AA64ZFR0_EL1:
> > > + if (!vcpu_has_sve(vcpu))
> > > + return REG_RAZ;
> > > + break;
> > > + }
> > > +
> > 
> > This should work, but I'm not sure it's preferable to giving affected
> > registers their own visibility check function.
> > 
> > Multiplexing all the ID regs through this one checker function will
> > introduce a bit of overhead for always-non-RAZ ID regs, but I'd guess
> > the impact is negligible given the other overheads on these paths.
> 
> Yes, my thought was that a switch isn't going to generate much overhead
> and consolidating the ID registers cleans things up a bit.

Well, no.  I don't have a particularly strong view on this.

The style of the code is being pulled in multiple directions in this
file already, so this doesn't introduce a new inconsistency as such.

If the number of registers handled in here becomes large then we might
want to review the situation again.
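For what it's worth, the two dispatch styles under discussion can be sketched like this (illustrative types, flags and encodings only, not the kernel's sys_reg_desc):

```c
#include <assert.h>
#include <stddef.h>

/* Per-register visibility callback in the descriptor vs. one central
 * checker that switches on the encoding.  All names here are stand-ins.
 */
#define REG_RAZ (1 << 1)

struct reg_desc {
	unsigned int id;
	unsigned int (*visibility)(const struct reg_desc *r); /* may be NULL */
};

/* Central checker: one switch covering all ID registers. */
static unsigned int id_visibility(const struct reg_desc *r)
{
	switch (r->id) {
	case 0x30440404:	/* stand-in encoding for ID_AA64ZFR0_EL1 */
		return REG_RAZ;	/* e.g. when the vcpu lacks SVE */
	}
	return 0;
}

static unsigned int visibility(const struct reg_desc *r)
{
	/* A per-register callback wins when present, else the central check. */
	return r->visibility ? r->visibility(r) : id_visibility(r);
}
```

The trade-off is a slightly longer central switch against one more function pointer per affected descriptor; either way the generic accessors only ever consult `visibility()`.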

> 
> > 
> > >   return 0;
> > >  }
> > >  
> > > @@ -1203,55 +1213,6 @@ static unsigned int sve_visibility(const struct kvm_vcpu *vcpu,
> > >   return REG_HIDDEN_USER | REG_HIDDEN_GUEST;
> > >  }
> > >  
> > > -/* Generate the emulated ID_AA64ZFR0_EL1 value exposed to the guest */
> > > -static u64 guest_id_aa64zfr0_el1(const struct kvm_vcpu *vcpu)
> > > -{
> > > - if (!vcpu_has_sve(vcpu))
> > > - return 0;
> > > -
> > > - return read_sanitised_ftr_reg(SYS_ID_AA64ZFR0_EL1);
> > > -}
> > > -
> > > -static bool access_id_aa64zfr0_el1(struct kvm_vcpu *vcpu,
> > > -struct sys_reg_params *p,
> > > -const struct sys_reg_desc *rd)
> > > -{
> > > - if (p->is_write)
> > > - return write_to_read_only(vcpu, p, rd);
> > > -
> > > - p->regval = guest_id_aa64zfr0_el1(vcpu);
> > > - return true;
> > > -}
> > > -
> > > -static int get_id_aa64zfr0_el1(struct kvm_vcpu *vcpu,
> > > - const struct sys_reg_desc *rd,
> > > - const struct kvm_one_reg *reg, void __user *uaddr)
> > > -{
> > > - u64 val;
> > > -
> > > - val = guest_id_aa64zfr0_el1(vcpu);
> > > - return reg_to_user(uaddr, &val, reg->id);
> > > -}
> > > -
> > > -static int set_id_aa64zfr0_el1(struct kvm_vcpu *vcpu,
> > > - const struct sys_reg_desc *rd,
> > > - const struct kvm_one_reg *reg, void __user *uaddr)
> > > -{
> > > - const u64 id = sys_reg_to_index(rd);
> > > - int err;
> > > - u64 val;
> > > -
> > > - err = reg_from_user(&val, uaddr, id);
> > > - if (err)
> > > - return err;
> > > -
> > > - /* This is what we mean by invariant: you can't change it. */
> &g

Re: [PATCH v2 2/3] KVM: arm64: Check RAZ visibility in ID register accessors

2020-11-04 Thread Dave Martin
On Tue, Nov 03, 2020 at 02:38:36PM +0100, Andrew Jones wrote:
> On Tue, Nov 03, 2020 at 11:23:54AM +0000, Dave Martin wrote:
> > On Mon, Nov 02, 2020 at 07:50:36PM +0100, Andrew Jones wrote:
> > > The instruction encodings of ID registers are preallocated. Until an
> > > encoding is assigned a purpose the register is RAZ. KVM's general ID
> > > register accessor functions already support both paths, RAZ or not.
> > > If for each ID register we can determine if it's RAZ or not, then all
> > > ID registers can build on the general functions. The register visibility
> > > function allows us to check whether a register should be completely
> > > hidden or not, extending it to also report when the register should
> > > be RAZ or not allows us to use it for ID registers as well.
> > 
> > Nit: no statement of what the patch does.
> 
> Hmm, I'm not sure what "...extending it to also report when the register
> should be RAZ or not allows us to use it for ID registers as well." is
> missing, other than spelling out that a new flag is being added for the
> extension. Please provide a suggestion.

Well, that's a subordinate clause, not a statement.  The containing
sentence is a statement about the _implications_ of doing it, but
nothing says that it is actually done.

Often, a less condensed repeat of the subject line is enough, say,
something like the following, as a separate paragraph at the end:

Check for RAZ visibility in the ID register accessor functions.

(Or rather, there should be a concise statement in the commit message
saying what the patch does, and the subject line should be a suitably
condensed version of _that_.)

You might want to add a simple statement of what is achieved:

This allows the RAZ case to be handled in a generic way
for all system registers.

That makes the intention and value of the patch easy to spot, while the
wordy paragraph is available for anyone who wants to understand the
background and rationale in more detail.

(This is just my view, but I think it's generally helpful to reviewers
to follow this rough pattern -- it makes it easy to skip non-critical
parts of the description and come back to them later on.  I might
propose edits in submitting-patches.rst to make this clearer -- and if
they are shot down in flames then I will shut up ;)

> 
> > 
> > You might want to point out that the introduced REG_RAZ functionality is
> > intentionally not used in this patch.
> 
> OK
> 
> > 
> > > No functional change intended.
> > > 
> > > Signed-off-by: Andrew Jones 
> > > ---
> > >  arch/arm64/kvm/sys_regs.c | 19 ---
> > >  arch/arm64/kvm/sys_regs.h | 10 ++
> > >  2 files changed, 26 insertions(+), 3 deletions(-)
> > > 
> > > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > > index 6ff0c15531ca..b8822a20b1ea 100644
> > > --- a/arch/arm64/kvm/sys_regs.c
> > > +++ b/arch/arm64/kvm/sys_regs.c
> > > @@ -1153,6 +1153,12 @@ static u64 read_id_reg(const struct kvm_vcpu *vcpu,
> > >   return val;
> > >  }
> > >  
> > > +static unsigned int id_visibility(const struct kvm_vcpu *vcpu,
> > > +   const struct sys_reg_desc *r)
> > > +{
> > > + return 0;
> > > +}
> > > +
> > >  /* cpufeature ID register access trap handlers */
> > >  
> > >  static bool __access_id_reg(struct kvm_vcpu *vcpu,
> > > @@ -1171,7 +1177,9 @@ static bool access_id_reg(struct kvm_vcpu *vcpu,
> > > struct sys_reg_params *p,
> > > const struct sys_reg_desc *r)
> > >  {
> > > - return __access_id_reg(vcpu, p, r, false);
> > > + bool raz = sysreg_visible_as_raz(vcpu, r);
> > > +
> > > + return __access_id_reg(vcpu, p, r, raz);
> > >  }
> > >  
> > >  static bool access_raz_id_reg(struct kvm_vcpu *vcpu,
> > > @@ -1283,13 +1291,17 @@ static int __set_id_reg(const struct kvm_vcpu *vcpu,
> > >  static int get_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
> > >const struct kvm_one_reg *reg, void __user *uaddr)
> > >  {
> > > - return __get_id_reg(vcpu, rd, uaddr, false);
> > > + bool raz = sysreg_visible_as_raz(vcpu, rd);
> > > +
> > > + return __get_id_reg(vcpu, rd, uaddr, raz);
> > >  }
> > >  
> > > static int set_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
> > > const struct kvm_on

Re: [PATCH v2 1/3] KVM: arm64: Don't hide ID registers from userspace

2020-11-04 Thread Dave Martin
On Tue, Nov 03, 2020 at 02:32:15PM +0100, Andrew Jones wrote:
> On Tue, Nov 03, 2020 at 11:18:19AM +0000, Dave Martin wrote:
> > On Mon, Nov 02, 2020 at 07:50:35PM +0100, Andrew Jones wrote:
> > > ID registers are RAZ until they've been allocated a purpose, but
> > > that doesn't mean they should be removed from the KVM_GET_REG_LIST
> > > list. So far we only have one register, SYS_ID_AA64ZFR0_EL1, that
> > > is hidden from userspace when its function is not present. Removing
> > > the userspace visibility checks is enough to reexpose it, as it
> > > already behaves as RAZ when the function is not present.
> > 
> > Pleae state what the patch does.  (The subject line serves as a summary
> > of that, but the commit message should make sense without it.)
> 
> I don't like "This patch ..." type of sentences in the commit message,
> but unless you have a better suggestion, then I'd just add a sentence
> like
> 
> "This patch ensures we still expose sysreg '3, 0, 0, 4, 4'
> (ID_AA64ZFR0_EL1) to userspace as RAZ when SVE is not implemented."

I'm not sure the sysreg encoding number is really needed.
submitting-patches.rst also says such statements should be in the
imperative.  Why not delete the "Removing the userspace visibility
checks..." sentence above and write:

Expose ID_AA64ZFR0_EL1 to userspace as RAZ when SVE is not
implemented.

Removing the userspace visibility checks is enough to reexpose it,
as it already behaves as RAZ for the guest when SVE is not present.

(The background to this gripe is that "traditional" mailers may invoke a
standalone editor on the message body when composing reply, so the
subject line may not be visible...)

> 
> > 
> > Also, how exactly !vcpu_has_sve() causes ID_AA64ZFR0_EL1 to behave as
> > RAZ with this change?  (I'm not saying it doesn't, but the code is not
> > trivial to follow...)
> 
> guest_id_aa64zfr0_el1() returns zero for the register when !vcpu_has_sve(),
> and all the accessors (userspace and guest) build on it.
> 
> I'm not sure how helpful it would be to add that sentence to the commit
> message though.

No worries, I don't think you need to add anything.  I figured out how
this works after my previous reply, so you can put my question down to
me being slow on the uptake...

> 
> > 
> > > 
> > > Fixes: 73433762fcae ("KVM: arm64/sve: System register context switch and access support")
> > > Cc:  # v5.2+
> > > Reported-by: 张东旭 
> > > Signed-off-by: Andrew Jones 
> > > ---
> > >  arch/arm64/kvm/sys_regs.c | 18 +-
> > >  1 file changed, 1 insertion(+), 17 deletions(-)
> > > 
> > > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > > index fb12d3ef423a..6ff0c15531ca 100644
> > > --- a/arch/arm64/kvm/sys_regs.c
> > > +++ b/arch/arm64/kvm/sys_regs.c
> > > @@ -1195,16 +1195,6 @@ static unsigned int sve_visibility(const struct kvm_vcpu *vcpu,
> > >   return REG_HIDDEN_USER | REG_HIDDEN_GUEST;
> > >  }
> > >  
> > > -/* Visibility overrides for SVE-specific ID registers */
> > > -static unsigned int sve_id_visibility(const struct kvm_vcpu *vcpu,
> > > -   const struct sys_reg_desc *rd)
> > > -{
> > > - if (vcpu_has_sve(vcpu))
> > > - return 0;
> > > -
> > > - return REG_HIDDEN_USER;
> > 
> > In light of this change, I think that REG_HIDDEN_GUEST and
> > REG_HIDDEN_USER are always either both set or both clear.  Given the
> > discussion on this issue, I'm thinking it probably doesn't even make
> > sense for these to be independent (?)
> > 
> > If REG_HIDDEN_USER is really redundant, I suggest stripping it out and
> > simplifying the code appropriately.
> > 
> > (In effect, I think your RAZ flag will do correctly what REG_HIDDEN_USER
> > was trying to achieve.)
> 
> We could consolidate REG_HIDDEN_GUEST and REG_HIDDEN_USER into REG_HIDDEN,
> which ZCR_EL1 and ptrauth registers will still use.

Sounds good to me.  Getting rid of _both_ the old names will help avoid
accidents too.
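A minimal model of the consolidated flags (names illustrative; the eventual kernel definitions may differ):

```c
#include <assert.h>

/* Visibility flags as discussed: one REG_HIDDEN flag replacing the
 * GUEST/USER pair, plus REG_RAZ for ID registers that remain visible
 * but read as zero.  Illustration only.
 */
#define REG_HIDDEN	(1 << 0)  /* hidden from both guest and userspace */
#define REG_RAZ		(1 << 1)  /* visible, but reads as zero */

static int visible_as_raz(unsigned int flags)
{
	return !(flags & REG_HIDDEN) && (flags & REG_RAZ);
}
```

With this shape, ZCR_EL1 and the ptrauth registers would carry REG_HIDDEN when their feature is absent, while ID_AA64ZFR0_EL1 would carry REG_RAZ and stay in the KVM_GET_REG_LIST output.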

[...]

Cheers
---Dave


Re: [PATCH v2 0/3] KVM: arm64: Fix get-reg-list regression

2020-11-03 Thread Dave Martin
On Mon, Nov 02, 2020 at 07:50:34PM +0100, Andrew Jones wrote:
> 张东旭  reported a regression seen with CentOS
> when migrating from an old kernel to a new one. The problem was
> that QEMU rejected the migration since KVM_GET_REG_LIST reported
> a register was missing on the destination. Extra registers are OK
> on the destination, but not missing ones. The regression reproduces
> with upstream kernels when migrating from a 4.15 or later kernel,
> up to one with commit 73433762fcae ("KVM: arm64/sve: System register
> context switch and access support"), to a kernel that includes that
> commit, e.g. the latest mainline (5.10-rc2).
> 
> The first patch of this series is the fix. The next two patches,
> which don't have any intended functional changes, allow ID_SANITISED
> to be used for registers that flip between exposing features and
> being RAZ, which allows some code to be removed.

Is it worth updating Documentation/virt/kvm/api.rst to clarify the
expected use during VM migrations, and the guarantees that are expected
to hold between migratable kernel versions?  Currently the specification
is a mixture of "surely it's obvious" and "whatever makes QEMU work".

I guess that caught me out, but I'll let others judge whether other
people are likely to get similarly confused.

[...]

Cheers
---Dave


Re: [PATCH v2 3/3] KVM: arm64: Remove AA64ZFR0_EL1 accessors

2020-11-03 Thread Dave Martin
On Mon, Nov 02, 2020 at 07:50:37PM +0100, Andrew Jones wrote:
> The AA64ZFR0_EL1 accessors are just the general accessors with
> their visibility function open-coded. They also skip the if-else
> chain in read_id_reg, but there's no reason not to go there.
> Indeed, consolidating ID register accessors and removing lines
> of code makes it worthwhile.
> 
> No functional change intended.

Nit: No statement of what the patch does.

> Signed-off-by: Andrew Jones 
> ---
>  arch/arm64/kvm/sys_regs.c | 61 +++
>  1 file changed, 11 insertions(+), 50 deletions(-)
> 
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index b8822a20b1ea..e2d6fb83280e 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -1156,6 +1156,16 @@ static u64 read_id_reg(const struct kvm_vcpu *vcpu,
>  static unsigned int id_visibility(const struct kvm_vcpu *vcpu,
> const struct sys_reg_desc *r)
>  {
> + u32 id = sys_reg((u32)r->Op0, (u32)r->Op1,
> +  (u32)r->CRn, (u32)r->CRm, (u32)r->Op2);
> +
> + switch (id) {
> + case SYS_ID_AA64ZFR0_EL1:
> + if (!vcpu_has_sve(vcpu))
> + return REG_RAZ;
> + break;
> + }
> +

This should work, but I'm not sure it's preferable to giving affected
registers their own visibility check function.

Multiplexing all the ID regs through this one checker function will
introduce a bit of overhead for always-non-RAZ ID regs, but I'd guess
the impact is negligible given the other overheads on these paths.

>   return 0;
>  }
>  
> @@ -1203,55 +1213,6 @@ static unsigned int sve_visibility(const struct kvm_vcpu *vcpu,
>   return REG_HIDDEN_USER | REG_HIDDEN_GUEST;
>  }
>  
> -/* Generate the emulated ID_AA64ZFR0_EL1 value exposed to the guest */
> -static u64 guest_id_aa64zfr0_el1(const struct kvm_vcpu *vcpu)
> -{
> - if (!vcpu_has_sve(vcpu))
> - return 0;
> -
> - return read_sanitised_ftr_reg(SYS_ID_AA64ZFR0_EL1);
> -}
> -
> -static bool access_id_aa64zfr0_el1(struct kvm_vcpu *vcpu,
> -struct sys_reg_params *p,
> -const struct sys_reg_desc *rd)
> -{
> - if (p->is_write)
> - return write_to_read_only(vcpu, p, rd);
> -
> - p->regval = guest_id_aa64zfr0_el1(vcpu);
> - return true;
> -}
> -
> -static int get_id_aa64zfr0_el1(struct kvm_vcpu *vcpu,
> - const struct sys_reg_desc *rd,
> - const struct kvm_one_reg *reg, void __user *uaddr)
> -{
> - u64 val;
> -
> - val = guest_id_aa64zfr0_el1(vcpu);
> - return reg_to_user(uaddr, &val, reg->id);
> -}
> -
> -static int set_id_aa64zfr0_el1(struct kvm_vcpu *vcpu,
> - const struct sys_reg_desc *rd,
> - const struct kvm_one_reg *reg, void __user *uaddr)
> -{
> - const u64 id = sys_reg_to_index(rd);
> - int err;
> - u64 val;
> -
> - err = reg_from_user(&val, uaddr, id);
> - if (err)
> - return err;
> -
> - /* This is what we mean by invariant: you can't change it. */
> - if (val != guest_id_aa64zfr0_el1(vcpu))
> - return -EINVAL;
> -
> - return 0;
> -}
> -
>  /*
>   * cpufeature ID register user accessors
>   *
> @@ -1515,7 +1476,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
>   ID_SANITISED(ID_AA64PFR1_EL1),
>   ID_UNALLOCATED(4,2),
>   ID_UNALLOCATED(4,3),
> - { SYS_DESC(SYS_ID_AA64ZFR0_EL1), access_id_aa64zfr0_el1, .get_user = get_id_aa64zfr0_el1, .set_user = set_id_aa64zfr0_el1, },
> + ID_SANITISED(ID_AA64ZFR0_EL1),

If keeping a dedicated helper, we could have a special macro for that, say

ID_SANITISED_VISIBILITY(ID_AA64ZFR0_EL1, id_aa64zfr0_el1_visibility)

[...]

Cheers
---Dave


Re: [PATCH v2 2/3] KVM: arm64: Check RAZ visibility in ID register accessors

2020-11-03 Thread Dave Martin
On Mon, Nov 02, 2020 at 07:50:36PM +0100, Andrew Jones wrote:
> The instruction encodings of ID registers are preallocated. Until an
> encoding is assigned a purpose the register is RAZ. KVM's general ID
> register accessor functions already support both paths, RAZ or not.
> If for each ID register we can determine if it's RAZ or not, then all
> ID registers can build on the general functions. The register visibility
> function allows us to check whether a register should be completely
> hidden or not, extending it to also report when the register should
> be RAZ or not allows us to use it for ID registers as well.

Nit: no statement of what the patch does.

You might want to point out that the introduced REG_RAZ functionality is
intentionally not used in this patch.

> No functional change intended.
> 
> Signed-off-by: Andrew Jones 
> ---
>  arch/arm64/kvm/sys_regs.c | 19 ---
>  arch/arm64/kvm/sys_regs.h | 10 ++
>  2 files changed, 26 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 6ff0c15531ca..b8822a20b1ea 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -1153,6 +1153,12 @@ static u64 read_id_reg(const struct kvm_vcpu *vcpu,
>   return val;
>  }
>  
> +static unsigned int id_visibility(const struct kvm_vcpu *vcpu,
> +   const struct sys_reg_desc *r)
> +{
> + return 0;
> +}
> +
>  /* cpufeature ID register access trap handlers */
>  
>  static bool __access_id_reg(struct kvm_vcpu *vcpu,
> @@ -1171,7 +1177,9 @@ static bool access_id_reg(struct kvm_vcpu *vcpu,
> struct sys_reg_params *p,
> const struct sys_reg_desc *r)
>  {
> - return __access_id_reg(vcpu, p, r, false);
> + bool raz = sysreg_visible_as_raz(vcpu, r);
> +
> + return __access_id_reg(vcpu, p, r, raz);
>  }
>  
>  static bool access_raz_id_reg(struct kvm_vcpu *vcpu,
> @@ -1283,13 +1291,17 @@ static int __set_id_reg(const struct kvm_vcpu *vcpu,
>  static int get_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
> const struct kvm_one_reg *reg, void __user *uaddr)
>  {
> - return __get_id_reg(vcpu, rd, uaddr, false);
> + bool raz = sysreg_visible_as_raz(vcpu, rd);
> +
> + return __get_id_reg(vcpu, rd, uaddr, raz);
>  }
>  
>  static int set_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
> const struct kvm_one_reg *reg, void __user *uaddr)
>  {
> - return __set_id_reg(vcpu, rd, uaddr, false);
> + bool raz = sysreg_visible_as_raz(vcpu, rd);
> +
> + return __set_id_reg(vcpu, rd, uaddr, raz);
>  }
>  
>  static int get_raz_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
> @@ -1381,6 +1393,7 @@ static bool access_mte_regs(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
>   .access = access_id_reg,\
>   .get_user = get_id_reg, \
>   .set_user = set_id_reg, \
> + .visibility = id_visibility,\

This is just the default for ID_SANITISED, right?

>  }
>  
>  /*
> diff --git a/arch/arm64/kvm/sys_regs.h b/arch/arm64/kvm/sys_regs.h
> index 5a6fc30f5989..9d3ef7cfa116 100644
> --- a/arch/arm64/kvm/sys_regs.h
> +++ b/arch/arm64/kvm/sys_regs.h
> @@ -61,6 +61,7 @@ struct sys_reg_desc {
>  
>  #define REG_HIDDEN_USER  (1 << 0) /* hidden from userspace ioctls */
>  #define REG_HIDDEN_GUEST (1 << 1) /* hidden from guest */
> +#define REG_RAZ  (1 << 2) /* RAZ from userspace ioctls and guest */
>  
>  static __printf(2, 3)
>  inline void print_sys_reg_msg(const struct sys_reg_params *p,
> @@ -129,6 +130,15 @@ static inline bool sysreg_hidden_from_user(const struct kvm_vcpu *vcpu,
>   return r->visibility(vcpu, r) & REG_HIDDEN_USER;
>  }
>  
> +static inline bool sysreg_visible_as_raz(const struct kvm_vcpu *vcpu,
> +  const struct sys_reg_desc *r)
> +{
> + if (likely(!r->visibility))
> + return false;
> +
> + return r->visibility(vcpu, r) & REG_RAZ;
> +}
> +

[...]

Looks reasonable, I think.

Cheers
---Dave


Re: [PATCH v2 1/3] KVM: arm64: Don't hide ID registers from userspace

2020-11-03 Thread Dave Martin
On Mon, Nov 02, 2020 at 07:50:35PM +0100, Andrew Jones wrote:
> ID registers are RAZ until they've been allocated a purpose, but
> that doesn't mean they should be removed from the KVM_GET_REG_LIST
> list. So far we only have one register, SYS_ID_AA64ZFR0_EL1, that
> is hidden from userspace when its function is not present. Removing
> the userspace visibility checks is enough to reexpose it, as it
> already behaves as RAZ when the function is not present.

Please state what the patch does.  (The subject line serves as a summary
of that, but the commit message should make sense without it.)

Also, how exactly does !vcpu_has_sve() cause ID_AA64ZFR0_EL1 to behave
as RAZ with this change?  (I'm not saying it doesn't, but the code is not
trivial to follow...)

> 
> Fixes: 73433762fcae ("KVM: arm64/sve: System register context switch and access support")
> Cc:  # v5.2+
> Reported-by: 张东旭 
> Signed-off-by: Andrew Jones 
> ---
>  arch/arm64/kvm/sys_regs.c | 18 +-
>  1 file changed, 1 insertion(+), 17 deletions(-)
> 
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index fb12d3ef423a..6ff0c15531ca 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -1195,16 +1195,6 @@ static unsigned int sve_visibility(const struct kvm_vcpu *vcpu,
>   return REG_HIDDEN_USER | REG_HIDDEN_GUEST;
>  }
>  
> -/* Visibility overrides for SVE-specific ID registers */
> -static unsigned int sve_id_visibility(const struct kvm_vcpu *vcpu,
> -   const struct sys_reg_desc *rd)
> -{
> - if (vcpu_has_sve(vcpu))
> - return 0;
> -
> - return REG_HIDDEN_USER;

In light of this change, I think that REG_HIDDEN_GUEST and
REG_HIDDEN_USER are always either both set or both clear.  Given the
discussion on this issue, I'm thinking it probably doesn't even make
sense for these to be independent (?)

If REG_HIDDEN_USER is really redundant, I suggest stripping it out and
simplifying the code appropriately.

(In effect, I think your RAZ flag will do correctly what REG_HIDDEN_USER
was trying to achieve.)

> -}
> -
>  /* Generate the emulated ID_AA64ZFR0_EL1 value exposed to the guest */
>  static u64 guest_id_aa64zfr0_el1(const struct kvm_vcpu *vcpu)
>  {
> @@ -1231,9 +1221,6 @@ static int get_id_aa64zfr0_el1(struct kvm_vcpu *vcpu,
>  {
>   u64 val;
>  
> - if (WARN_ON(!vcpu_has_sve(vcpu)))
> - return -ENOENT;
> -
>   val = guest_id_aa64zfr0_el1(vcpu);
>   return reg_to_user(uaddr, &val, reg->id);
>  }
> @@ -1246,9 +1233,6 @@ static int set_id_aa64zfr0_el1(struct kvm_vcpu *vcpu,
>   int err;
>   u64 val;
>  
> - if (WARN_ON(!vcpu_has_sve(vcpu)))
> - return -ENOENT;
> -
>   err = reg_from_user(&val, uaddr, id);
>   if (err)
>   return err;
> @@ -1518,7 +1502,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
>   ID_SANITISED(ID_AA64PFR1_EL1),
>   ID_UNALLOCATED(4,2),
>   ID_UNALLOCATED(4,3),
> - { SYS_DESC(SYS_ID_AA64ZFR0_EL1), access_id_aa64zfr0_el1, .get_user = get_id_aa64zfr0_el1, .set_user = set_id_aa64zfr0_el1, .visibility = sve_id_visibility },
> + { SYS_DESC(SYS_ID_AA64ZFR0_EL1), access_id_aa64zfr0_el1, .get_user = get_id_aa64zfr0_el1, .set_user = set_id_aa64zfr0_el1, },
>   ID_UNALLOCATED(4,5),
>   ID_UNALLOCATED(4,6),
>   ID_UNALLOCATED(4,7),

Otherwise looks reasonable.

Cheers
---Dave


Re: Kernel patch cases qemu live migration failed.

2020-10-19 Thread Dave Martin
On Mon, Oct 19, 2020 at 05:23:11PM +0200, Andrew Jones wrote:
> On Mon, Oct 19, 2020 at 03:58:40PM +0100, Dave Martin wrote:
> > On Mon, Oct 19, 2020 at 03:18:11PM +0100, Peter Maydell wrote:
> > > On Mon, 19 Oct 2020 at 14:40, Andrew Jones  wrote:
> > > >
> > > > On Mon, Oct 19, 2020 at 12:43:33PM +0100, Peter Maydell wrote:
> > > > > Well, ID regs are special in the architecture -- they always exist
> > > > > and must RAZ/WI, even if they're not actually given any fields yet.
> > > > > This is different from other "unused" parts of the system register
> > > > > encoding space, which UNDEF.
> > > >
> > > > Table D12-2 confirms the register should be RAZ, as it says the register
> > > > is "RO, but RAZ if SVE is not implemented". Does "RO" imply "WI", 
> > > > though?
> > > > For the guest we inject an exception on writes, and for userspace we
> > > > require the value to be preserved on write.
> > > 
> > > Sorry, I mis-spoke. They're RAZ, but not WI, just RO (which is to say
> > > they'll UNDEF if you try to write to them).
> > > 
> > > > I think we should follow the spec, even for userspace access, and be RAZ
> > > > for when the feature isn't implemented. As for writes, assuming the
> > > > exception injection is what we want for the guest (not WI), then that's
> > > > correct. For userspace, I think we should continue forcing preservation
> > > > (which will force preservation of zero when it's RAZ).
> > > 
> > > Yes, that sounds right.
> > 
> > [...]
> > 
> > > > > The problem is that you've actually removed registers from
> > > > > the list that were previously in it (because pre-SVE
> > > > > kernels put this ID register in the list as a RAZ/WI register,
> > > > > and now it's not in the list if SVE isn't supported).
> > 
> > Define "previously", though.  IIUC, the full enumeration was added in
> > v4.15 (with ID_AA64ZFR0_EL1 still not supported at all):
> > 
> > v4.15-rc1~110^2~27
> > 93390c0a1b20 ("arm64: KVM: Hide unsupported AArch64 CPU features from 
> > guests")
> > 
> > 
> > And then ID_AA64FZR0_EL1 was removed from the enumeration, also in
> > v4.15:
> > 
> > v4.15-rc1~110^2~5
> > 07d79fe7c223 ("arm64/sve: KVM: Hide SVE from CPU features exposed to 
> > guests")
> > 
> > 
> > So, are there really two upstream kernel tags that are mismatched on
> > this, or is this just a bisectability issue in v4.14..v4.15?
> > 
> > It's a while since I looked at this, and I may have misunderstood the
> > timeline.
> > 
> > 
> > > > > > So, I think that instead of changing the ID_AA64ZFR0_EL1 behaviour,
> > > > > > perhaps we should move all ID_UNALLOCATED() regs (and possibly
> > > > > > ID_HIDDEN(), not sure about that) to have REG_HIDDEN_USER 
> > > > > > visibility.
> > > > >
> > > > > What does this do as far as the user-facing list-of-registers
> > > > > is concerned? All these registers need to remain in the
> > > > > KVM_GET_REG_LIST list, or you break migration from an old
> > > > > kernel to a new one.
> > 
> > OK, I think I see where you are coming from, now.
> > 
> > It may make sense to get rid of the REG_HIDDEN_GUEST / REG_HIDDEN_USER
> > distinction, and provide the same visibility for userspace as for MSR/
> > MRS all the time.  This would restore ID_AA64ZFR0_EL1 into the userspace
> > view, and may also allow a bit of simplification in the code.
> > 
> > Won't this still break migration from the resulting kernel to a
> > current kernel that hides ID_AA64ZFR0_EL1?  Or have I misunderstood
> > something?
> >
> 
> Yes, but, while neither direction old -> new nor new -> old is actually
> something that people should do when using host cpu passthrough (they
> should only ever migrate between identical hosts, both hardware and
> host kernel version), migrating from old -> new makes more sense, as
> that's the upgrade path, and it's more supportable - we can workaround
> things on the new side. So, long story short, new -> old will fail due
> to making this change, but it's still probably the right thing to do,
> as we'll be defining a better pattern for ID registers going forward,
> and we never claimed new -> old migrations would work anyway with host
> passthrough.
> 
> Thanks,
> drew

Ack, just wanted to make sure I understood the implications correctly.

I'm still not sure I fully understand why we hit this problem (i.e.,
ZFR0 enumeration mismatch between host and guest) in the first place,
unless I've misunderstood which patches make these changes, or unless
RHEL has cherry-picked odd patches that weren't intended to be applied
separately...

Cheers
---Dave


Re: Kernel patch cases qemu live migration failed.

2020-10-19 Thread Dave Martin
On Mon, Oct 19, 2020 at 03:18:11PM +0100, Peter Maydell wrote:
> On Mon, 19 Oct 2020 at 14:40, Andrew Jones  wrote:
> >
> > On Mon, Oct 19, 2020 at 12:43:33PM +0100, Peter Maydell wrote:
> > > Well, ID regs are special in the architecture -- they always exist
> > > and must RAZ/WI, even if they're not actually given any fields yet.
> > > This is different from other "unused" parts of the system register
> > > encoding space, which UNDEF.
> >
> > Table D12-2 confirms the register should be RAZ, as it says the register
> > is "RO, but RAZ if SVE is not implemented". Does "RO" imply "WI", though?
> > For the guest we inject an exception on writes, and for userspace we
> > require the value to be preserved on write.
> 
> Sorry, I mis-spoke. They're RAZ, but not WI, just RO (which is to say
> they'll UNDEF if you try to write to them).
> 
> > I think we should follow the spec, even for userspace access, and be RAZ
> > for when the feature isn't implemented. As for writes, assuming the
> > exception injection is what we want for the guest (not WI), then that's
> > correct. For userspace, I think we should continue forcing preservation
> > (which will force preservation of zero when it's RAZ).
> 
> Yes, that sounds right.

[...]

> > > The problem is that you've actually removed registers from
> > > the list that were previously in it (because pre-SVE
> > > kernels put this ID register in the list as a RAZ/WI register,
> > > and now it's not in the list if SVE isn't supported).

Define "previously", though.  IIUC, the full enumeration was added in
v4.15 (with ID_AA64ZFR0_EL1 still not supported at all):

v4.15-rc1~110^2~27
93390c0a1b20 ("arm64: KVM: Hide unsupported AArch64 CPU features from guests")


And then ID_AA64FZR0_EL1 was removed from the enumeration, also in
v4.15:

v4.15-rc1~110^2~5
07d79fe7c223 ("arm64/sve: KVM: Hide SVE from CPU features exposed to guests")


So, are there really two upstream kernel tags that are mismatched on
this, or is this just a bisectability issue in v4.14..v4.15?

It's a while since I looked at this, and I may have misunderstood the
timeline.


> > > > So, I think that instead of changing the ID_AA64ZFR0_EL1 behaviour,
> > > > perhaps we should move all ID_UNALLOCATED() regs (and possibly
> > > > ID_HIDDEN(), not sure about that) to have REG_HIDDEN_USER visibility.
> > >
> > > What does this do as far as the user-facing list-of-registers
> > > is concerned? All these registers need to remain in the
> > > KVM_GET_REG_LIST list, or you break migration from an old
> > > kernel to a new one.

OK, I think I see where you are coming from, now.

It may make sense to get rid of the REG_HIDDEN_GUEST / REG_HIDDEN_USER
distinction, and provide the same visibility for userspace as for MSR/
MRS all the time.  This would restore ID_AA64ZFR0_EL1 into the userspace
view, and may also allow a bit of simplification in the code.

Won't this still break migration from the resulting kernel to a
current kernel that hides ID_AA64ZFR0_EL1?  Or have I misunderstood
something?

Cheers
---Dave


Re: Kernel patch cases qemu live migration failed.

2020-10-19 Thread Dave Martin
On Mon, Oct 19, 2020 at 11:25:25AM +0200, Andrew Jones wrote:
> On Thu, Oct 15, 2020 at 03:57:02PM +0100, Peter Maydell wrote:
> > On Thu, 15 Oct 2020 at 15:41, Andrew Jones  wrote:
> > > The reporter states neither the source nor destination hardware supports
> > > SVE. My guess is that what's happening is the reserved ID register
> > > ID_UNALLOCATED(4,4) was showing up in the KVM_GET_REG_LIST count on
> > > the old kernel, but the new kernel filters it out. Maybe it is a
> > > bug to filter it out of the count, as it's a reserved ID register and
> > > I suppose the other reserved ID registers are still showing up?
> > 
> > Yeah, RES0 ID registers should show up in the list, because otherwise
> > userspace has to annoyingly special case them when the architecture
> > eventually defines behaviour for them.
> > 
> > Dave's comment in the kernel commit message
> > # ID_AA64ZFR0_EL1 is RO-RAZ for MRS/MSR when SVE is disabled for the
> > # guest, but for compatibility with non-SVE aware KVM implementations
> > # the register should not be enumerated at all for KVM_GET_REG_LIST
> > # in this case.
> > seems wrong to me -- for compatibility the register should remain
> > present and behave as RAZ/WI if SVE is disabled in the guest,
> > the same way it was before the kernel/KVM knew about SVE at all.
> 
> Yup, I agree with you and I'll try writing a patch for this.
> 
> Thanks,
> drew

I'm not quite sure about Peter's assessment here.

I agree with the inconsistency identified here: we always enumerate all
unallocated ID regs, but we enumerate ID_AA64ZFR0_EL1 conditionally.
This doesn't feel right: on a non-SVE guest, ID_AA64ZFR0_EL1 should
behave exactly as an unallocated ID register.

I'm not sure about the proposed fix.

For one thing, I'm not sure that old hosts will accept writing of 0 to
arbitrary ID regs.  This may require some digging, but commit
93390c0a1b20 ("arm64: KVM: Hide unsupported AArch64 CPU features from guests")
may be the place to start.

My original idea was that at the source end we should be conservative:
enumerate and dump the minimum set of registers relevant to the
target -- for compatibility with old hosts that don't handle the
unallocated ID regs at all.  At the destination end, modern hosts
should be permissive, i.e., allow any ID reg to be set to 0, but don't
require the setting of any reg that older source hosts might not send.

So, I think that instead of changing the ID_AA64ZFR0_EL1 behaviour,
perhaps we should move all ID_UNALLOCATED() regs (and possibly
ID_HIDDEN(), not sure about that) to have REG_HIDDEN_USER visibility.

Thoughts?

---Dave


Re: [PATCH v2 0/4] Manage vcpu flags from the host

2020-07-22 Thread Dave Martin
On Wed, Jul 22, 2020 at 05:36:34PM +0100, Marc Zyngier wrote:
> On 2020-07-22 17:24, Dave Martin wrote:
> >On Mon, Jul 13, 2020 at 10:05:01PM +0100, Andrew Scull wrote:
> >>The aim is to keep management of the flags in the host and out of hyp
> >>where possible. I find this makes it easier to understand how the flags
> >>are used as the responsibilities are clearly divided.
> >>
> >>The series applies on top of kvmarm/next after VHE and nVHE have been
> >>separated.
> >
> >(A commit ID would be useful for someone trying to apply the patches.)
> >
> >>From v1 <20200710095754.3641976-1-asc...@google.com>:
> >
> >(Nit: Is there some easy way of looking up mails by Message-ID?
> >
> >Otherwise, it can be helpful to have a mail archive URL here, e.g.,
> >lore.kernel.org)
> 
> I routinely use https://lore.kernel.org/r/, which does
> the right thing.  But indeed, including the link directly is
> the preferred course of action, and saves having to assemble
> the URL by hand.

Cool, I'll make a mental note.

Now I just need to start posting non-unique message-IDs ;)

Cheers
---Dave


Re: [PATCH v2 3/4] KVM: arm64: Leave vcpu FPSIMD synchronization in host

2020-07-22 Thread Dave Martin
On Mon, Jul 13, 2020 at 10:05:04PM +0100, Andrew Scull wrote:

vv Nit: Message body doesn't say what changed _or_ why.  See comments on
patch 2.

> The task state can be checked by the host and the vcpu flags updated
> before calling into hyp. Hyp simply acts on the state provided to it by
> the host and updates it when switching to the vcpu state.

It would be useful here to explain the renaming of
kvm_arch_vcpu_ctxsync_fp().

> 
> Signed-off-by: Andrew Scull 
> ---
>  arch/arm64/include/asm/kvm_host.h   |  3 ++-
>  arch/arm64/kvm/arm.c|  4 +++-
>  arch/arm64/kvm/fpsimd.c | 19 ++-
>  arch/arm64/kvm/hyp/include/hyp/switch.h | 19 ---
>  arch/arm64/kvm/hyp/nvhe/switch.c|  3 +--
>  arch/arm64/kvm/hyp/vhe/switch.c |  3 +--
>  6 files changed, 25 insertions(+), 26 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index b06f24b5f443..1a062d44b395 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -616,7 +616,8 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
>  /* Guest/host FPSIMD coordination helpers */
>  int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu);
>  void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu);
> -void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu);
> +void kvm_arch_vcpu_sync_fp_before_run(struct kvm_vcpu *vcpu);
> +void kvm_arch_vcpu_sync_fp_after_run(struct kvm_vcpu *vcpu);
>  void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu);
>  
>  static inline bool kvm_pmu_counter_deferred(struct perf_event_attr *attr)
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index 98f05bdac3c1..c91b0a66bf20 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -682,6 +682,8 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
>  
>   local_irq_disable();
>  
> + kvm_arch_vcpu_sync_fp_before_run(vcpu);
> +
>   kvm_vgic_flush_hwstate(vcpu);
>  
>   /*
> @@ -769,7 +771,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
>   if (static_branch_unlikely(_irqchip_in_use))
>   kvm_timer_sync_user(vcpu);
>  
> - kvm_arch_vcpu_ctxsync_fp(vcpu);
> + kvm_arch_vcpu_sync_fp_after_run(vcpu);
>  
>   /*
>* We may have taken a host interrupt in HYP mode (ie
> diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
> index c6b3197f6754..2779cc11f3dd 100644
> --- a/arch/arm64/kvm/fpsimd.c
> +++ b/arch/arm64/kvm/fpsimd.c
> @@ -88,13 +88,30 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
>   }
>  }
>  
> +void kvm_arch_vcpu_sync_fp_before_run(struct kvm_vcpu *vcpu)
> +{
> + WARN_ON_ONCE(!irqs_disabled());
> +
> + if (!system_supports_fpsimd())
> + return;
> +
> + /*
> +  * If the CPU's FP state is transient, there is no need to save the

See comments on patch 2 regarding "transient".

Beyond not needing to save the state, we must not even attempt to do so.

> +  * current state. Without further information, it must also be assumed
> +  * that the vcpu's state is not loaded.
> +  */
> + if (test_thread_flag(TIF_FOREIGN_FPSTATE))
> + vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED |
> +   KVM_ARM64_FP_HOST);
> +}
> +
>  /*
>   * If the guest FPSIMD state was loaded, update the host's context
>   * tracking data mark the CPU FPSIMD regs as dirty and belonging to vcpu
>   * so that they will be written back if the kernel clobbers them due to
>   * kernel-mode NEON before re-entry into the guest.
>   */
> -void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu)
> +void kvm_arch_vcpu_sync_fp_after_run(struct kvm_vcpu *vcpu)
>  {
>   WARN_ON_ONCE(!irqs_disabled());
>  
> diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
> index 0511af14dc81..65cde758abad 100644
> --- a/arch/arm64/kvm/hyp/include/hyp/switch.h
> +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
> @@ -25,28 +25,9 @@
>  #include 
>  #include 
>  #include 
> -#include 
>  
>  extern const char __hyp_panic_string[];
>  
> -/* Check whether the FP regs were dirtied while in the host-side run loop: */
> -static inline bool update_fp_enabled(struct kvm_vcpu *vcpu)
> -{
> - /*
> -  * When the system doesn't support FP/SIMD, we cannot rely on
> -  * the _TIF_FOREIGN_FPSTATE flag. However, we always inject an
> -  * abort on the very first access to FP and thus we should never
> -  * see KVM_ARM64_FP_ENABLED. For added safety, make sure we always
> -  * trap the accesses.
> -  */
> - if (!system_supports_fpsimd() ||
> - vcpu->arch.host_thread_info->flags & _TIF_FOREIGN_FPSTATE)
> - vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED |
> -   KVM_ARM64_FP_HOST);
> -
> - return !!(vcpu->arch.flags & KVM_ARM64_FP_ENABLED);

[...]

Re: [PATCH v2 0/4] Manage vcpu flags from the host

2020-07-22 Thread Dave Martin
On Mon, Jul 13, 2020 at 10:05:01PM +0100, Andrew Scull wrote:
> The aim is to keep management of the flags in the host and out of hyp
> where possible. I find this makes it easier to understand how the flags
> are used as the responsibilities are clearly divided.
> 
> The series applies on top of kvmarm/next after VHE and nVHE have been
> separated.

(A commit ID would be useful for someone trying to apply the patches.)

> From v1 <20200710095754.3641976-1-asc...@google.com>:

(Nit: Is there some easy way of looking up mails by Message-ID?

Otherwise, it can be helpful to have a mail archive URL here, e.g.,
lore.kernel.org)

>  - Split FP change into smaller patches
>  - Addressed Dave's other comments
> 
> Andrew Scull (4):
>   KVM: arm64: Leave KVM_ARM64_DEBUG_DIRTY updates to the host
>   KVM: arm64: Predicate FPSIMD vcpu flags on feature support
>   KVM: arm64: Leave vcpu FPSIMD synchronization in host
>   KVM: arm64: Stop mapping host task thread flags to hyp
> 
>  arch/arm64/include/asm/kvm_host.h |  7 ++-
>  arch/arm64/kvm/arm.c  |  4 +-
>  arch/arm64/kvm/debug.c|  2 +
>  arch/arm64/kvm/fpsimd.c   | 54 ---
>  arch/arm64/kvm/hyp/include/hyp/debug-sr.h |  2 -
>  arch/arm64/kvm/hyp/include/hyp/switch.h   | 19 
>  arch/arm64/kvm/hyp/nvhe/switch.c  |  3 +-
>  arch/arm64/kvm/hyp/vhe/switch.c   |  3 +-
>  8 files changed, 48 insertions(+), 46 deletions(-)
> 
> -- 
> 2.27.0.383.g050319c2ae-goog
> 


Re: [PATCH v2 4/4] KVM: arm64: Stop mapping host task thread flags to hyp

2020-07-22 Thread Dave Martin
On Mon, Jul 13, 2020 at 10:05:05PM +0100, Andrew Scull wrote:

Familiar nits about commit message and Subject line.

> Since hyp now doesn't access the host task's thread flags, there's no
> need to map them up to hyp.
> 
> Signed-off-by: Andrew Scull 

With a reworked commit message:

Reviewed-by: Dave Martin 

> ---
>  arch/arm64/include/asm/kvm_host.h |  2 --
>  arch/arm64/kvm/fpsimd.c   | 11 +--
>  2 files changed, 1 insertion(+), 12 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 1a062d44b395..fb0dfffa8be1 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -24,7 +24,6 @@
>  #include 
>  #include 
>  #include 
> -#include 
>  
>  #define __KVM_HAVE_ARCH_INTC_INITIALIZED
>  
> @@ -320,7 +319,6 @@ struct kvm_vcpu_arch {
>   struct kvm_guest_debug_arch vcpu_debug_state;
>   struct kvm_guest_debug_arch external_debug_state;
>  
> - struct thread_info *host_thread_info;   /* hyp VA */
>   struct user_fpsimd_state *host_fpsimd_state;/* hyp VA */
>  
>   struct {
> diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
> index 2779cc11f3dd..08ce264c2f41 100644
> --- a/arch/arm64/kvm/fpsimd.c
> +++ b/arch/arm64/kvm/fpsimd.c
> @@ -27,22 +27,13 @@ int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu)
>  {
>   int ret;
>  
> - struct thread_info *ti = >thread_info;
>   struct user_fpsimd_state *fpsimd = >thread.uw.fpsimd_state;
>  
> - /*
> -  * Make sure the host task thread flags and fpsimd state are
> -  * visible to hyp:
> -  */
> - ret = create_hyp_mappings(ti, ti + 1, PAGE_HYP);
> - if (ret)
> - goto error;
> -
> + /* Make sure the host task fpsimd state is visible to hyp: */
>   ret = create_hyp_mappings(fpsimd, fpsimd + 1, PAGE_HYP);
>   if (ret)
>   goto error;
>  
> - vcpu->arch.host_thread_info = kern_hyp_va(ti);
>   vcpu->arch.host_fpsimd_state = kern_hyp_va(fpsimd);
>  error:
>   return ret;
> -- 
> 2.27.0.383.g050319c2ae-goog
> 


Re: [PATCH v2 1/4] KVM: arm64: Leave KVM_ARM64_DEBUG_DIRTY updates to the host

2020-07-22 Thread Dave Martin
On Mon, Jul 13, 2020 at 10:05:02PM +0100, Andrew Scull wrote:
> Move the clearing of KVM_ARM64_DEBUG_DIRTY from being one of the last
> things hyp does before exiting to the host to being one of the first
> things the host does after hyp exits.
> 
> This means the host always manages the state of the bit and hyp simply
> respects that in the context switch.
> 
> No functional change.
> 
> Signed-off-by: Andrew Scull 

Seems reasonable, though we have to map the vcpu arch flags into hyp
anyway.  For FPSIMD we do maintain these flags from hyp, in order to
avoid mapping in host-specific stuff (the thread flags).

So maybe this change isn't that useful?

I don't have a strong opinion though.  If this change fits in better
with the broader KVM work you're doing, I don't see a problem with it.

So, FWIW:

Reviewed-by: Dave Martin 

> ---
>  arch/arm64/include/asm/kvm_host.h | 2 +-
>  arch/arm64/kvm/debug.c| 2 ++
>  arch/arm64/kvm/hyp/include/hyp/debug-sr.h | 2 --
>  3 files changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_host.h 
> b/arch/arm64/include/asm/kvm_host.h
> index e1a32c0707bb..b06f24b5f443 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -404,7 +404,7 @@ struct kvm_vcpu_arch {
>  })
>  
>  /* vcpu_arch flags field values: */
> -#define KVM_ARM64_DEBUG_DIRTY(1 << 0)
> +#define KVM_ARM64_DEBUG_DIRTY(1 << 0) /* vcpu is using debug 
> */
>  #define KVM_ARM64_FP_ENABLED (1 << 1) /* guest FP regs loaded */
>  #define KVM_ARM64_FP_HOST(1 << 2) /* host FP regs loaded */
>  #define KVM_ARM64_HOST_SVE_IN_USE(1 << 3) /* backup for host TIF_SVE */
> diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c
> index 7a7e425616b5..e9932618a362 100644
> --- a/arch/arm64/kvm/debug.c
> +++ b/arch/arm64/kvm/debug.c
> @@ -209,6 +209,8 @@ void kvm_arm_clear_debug(struct kvm_vcpu *vcpu)
>  {
>   trace_kvm_arm_clear_debug(vcpu->guest_debug);
>  
> + vcpu->arch.flags &= ~KVM_ARM64_DEBUG_DIRTY;
> +
>   if (vcpu->guest_debug) {
>   restore_guest_debug_regs(vcpu);
>  
> diff --git a/arch/arm64/kvm/hyp/include/hyp/debug-sr.h 
> b/arch/arm64/kvm/hyp/include/hyp/debug-sr.h
> index 0297dc63988c..50ca5d048017 100644
> --- a/arch/arm64/kvm/hyp/include/hyp/debug-sr.h
> +++ b/arch/arm64/kvm/hyp/include/hyp/debug-sr.h
> @@ -161,8 +161,6 @@ static inline void __debug_switch_to_host_common(struct 
> kvm_vcpu *vcpu)
>  
>   __debug_save_state(guest_dbg, guest_ctxt);
>   __debug_restore_state(host_dbg, host_ctxt);
> -
> - vcpu->arch.flags &= ~KVM_ARM64_DEBUG_DIRTY;
>  }
>  
>  #endif /* __ARM64_KVM_HYP_DEBUG_SR_H__ */
> -- 
> 2.27.0.383.g050319c2ae-goog
> 


Re: [PATCH v2 2/4] KVM: arm64: Predicate FPSIMD vcpu flags on feature support

2020-07-22 Thread Dave Martin
On Mon, Jul 13, 2020 at 10:05:03PM +0100, Andrew Scull wrote:
> If the system doesn't support FPSIMD features then the flags must never

Mustn't they?  Why not?  I think the flags are currently ignored in this
case, which is just as good.

I'm not disagreeing with the change here; I just want to be clear on the
rationale.

> be set. These are the same feature checks performed by hyp when handling
> an FPSIMD trap.

Nit: Try to ensure that the commit message make sense even without the
subject line: i.e., the subject line is just a one-line summary of the
commit message and should not add any new information.

(This makes life easier for users of mailers that invoke an editor on
the message body only when replying -- i.e., Mutt and probably some
others.  It also helps with understanding the state in .git/rebase-apply/
during a rebase, where the subject line and the rest of the message end
up in different places.)


Also, it's worth noting the comment additions here, since they look
substantial and it's not clear from just looking at this patch that the
new comments are just clarifying the existing behaviour.

> Signed-off-by: Andrew Scull 
> ---
>  arch/arm64/kvm/fpsimd.c | 24 +++-
>  1 file changed, 19 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
> index 3e081d556e81..c6b3197f6754 100644
> --- a/arch/arm64/kvm/fpsimd.c
> +++ b/arch/arm64/kvm/fpsimd.c
> @@ -52,7 +52,7 @@ int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu)
>   * Prepare vcpu for saving the host's FPSIMD state and loading the guest's.
>   * The actual loading is done by the FPSIMD access trap taken to hyp.
>   *
> - * Here, we just set the correct metadata to indicate that the FPSIMD
> + * Here, we just set the correct metadata to indicate whether the FPSIMD
>   * state in the cpu regs (if any) belongs to current on the host.
>   *
>   * TIF_SVE is backed up here, since it may get clobbered with guest state.
> @@ -63,15 +63,29 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
>   BUG_ON(!current->mm);
>  
>   vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED |
> +   KVM_ARM64_FP_HOST |
> KVM_ARM64_HOST_SVE_IN_USE |
> KVM_ARM64_HOST_SVE_ENABLED);
> +
> + if (!system_supports_fpsimd())
> + return;
> +
> + /*
> +  * Having just come from the user task, if any FP state is loaded it
> +  * will be that of the task. Make a note of this but, just before
> +  * entering the vcpu, it will be double checked that the loaded FP
> +  * state isn't transient because things could change between now and
> +  * then.
> +  */

Can we avoid this word "transient"?  Just because the state isn't our
state doesn't mean it will be thrown away.

If the regs contains the state for task foo, and we exit the run loop
before taking an FP trap from the guest, then we might context switch
back to foo before re-entering userspace in the KVM thread.  In that
case the regs aren't reloaded.  Unless someone called
fpsimd_flush_cpu_state() in the meantime, the regs will be assumed still
to be correctly loaded for foo.

To be clear, TIF_FOREIGN_FPSTATE doesn't mean that the regs are garbage,
just that they don't contain the right state for current.


This may not matter that much for this code, but I don't want people to
get confused when maintaining related code...


Here, does it make sense to say something like:

--8<--

Having just come from the user task, if the FP regs contain state for
current then it is definitely host user state, not vcpu state.  Note
this here, ready for the first entry to the guest.

-->8--

>   vcpu->arch.flags |= KVM_ARM64_FP_HOST;
>  
> - if (test_thread_flag(TIF_SVE))
> - vcpu->arch.flags |= KVM_ARM64_HOST_SVE_IN_USE;
> + if (system_supports_sve()) {
> + if (test_thread_flag(TIF_SVE))
> + vcpu->arch.flags |= KVM_ARM64_HOST_SVE_IN_USE;
>  
> - if (read_sysreg(cpacr_el1) & CPACR_EL1_ZEN_EL0EN)
> - vcpu->arch.flags |= KVM_ARM64_HOST_SVE_ENABLED;
> + if (read_sysreg(cpacr_el1) & CPACR_EL1_ZEN_EL0EN)
> + vcpu->arch.flags |= KVM_ARM64_HOST_SVE_ENABLED;
> + }
>  }

[...]

Cheers
---Dave


Re: [PATCH 2/2] KVM: arm64: Leave vcpu FPSIMD synchronization in host

2020-07-14 Thread Dave Martin
On Mon, Jul 13, 2020 at 09:42:04PM +0100, Andrew Scull wrote:
> On Mon, Jul 13, 2020 at 05:04:21PM +0100, Dave Martin wrote:
> > On Fri, Jul 10, 2020 at 10:57:54AM +0100, Andrew Scull wrote:
> > > The task state can be checked by the host and the vcpu flags updated
> > > before calling into hyp. This more neatly separates the concerns and
> > > removes the need to map the task flags to EL2.
> > > 
> > > Hyp acts on the state provided to it by the host and updates it when
> > > switching to the vcpu state.
> > 
> > Can this patch be split up?  We have a few overlapping changes here.
> > 
> > i.e., renaming and refactoring of hooks; moving some code around; and a
> > couple of other changes that are not directly related (noted below).
> 
> Indeed it can, into at least 3.
> 
> > Overall this looks like a decent cleanup however.  It was always a bit
> > nasty to have to map the thread flags into Hyp.
> 
> Glad to hear, I'll have to get it in a better shape.
> 
> > Side question: currently we do fpsimd_save_and_flush_cpu_state() in
> > kvm_arch_vcpu_put_fp().  Can we remove the flush so that the vcpu state
> > lingers in the CPU regs and can be reclaimed when switching back to the
> > KVM thread?
> > 
> > This could remove some overhead when the KVM run loop is preempted by a
> > kernel thread and subsequently resumed without passing through userspace.
> > 
> > We would need to flush this state when anything else tries to change the
> > vcpu FP regs, which is one reason I skipped this previously: it would
> > require a bit of refactoring of fpsimd_flush_task_state() so that a non-
> > task context can also be flushed.
> > 
> > (This isn't directly related to this series.)
> 
> I don't plan to address this at the moment but I do believe there are
> chances to reduce the need for saves and restores. If the flush is
> removed a similar check to that done for tasks could also apply to vCPUs
> i.e. if the last FPSIMD state this CPU had was the vCPU and the vCPU
> last ran on this CPU then the vCPU's FPSIMD state is already loaded.

Sounds reasonable.

As you observe, this is mostly a case of refactoring the code a bit and
making the vcpu context slightly less of a special case.

(And if you don't do it, no worries -- I just couldn't resist getting it
done "for free" ;)

> 
> > Additional minor comments below.
> > 
> > > 
> > > No functional change.
> > > 
> > > Signed-off-by: Andrew Scull 
> > > ---
> > >  arch/arm64/include/asm/kvm_host.h   |  5 +--
> > >  arch/arm64/kvm/arm.c|  4 +-
> > >  arch/arm64/kvm/fpsimd.c | 57 ++---
> > >  arch/arm64/kvm/hyp/include/hyp/switch.h | 19 -
> > >  arch/arm64/kvm/hyp/nvhe/switch.c|  3 +-
> > >  arch/arm64/kvm/hyp/vhe/switch.c |  3 +-
> > >  6 files changed, 48 insertions(+), 43 deletions(-)
> > > 
> > > diff --git a/arch/arm64/include/asm/kvm_host.h 
> > > b/arch/arm64/include/asm/kvm_host.h
> > > index b06f24b5f443..ca1621eeb9d9 100644
> > > --- a/arch/arm64/include/asm/kvm_host.h
> > > +++ b/arch/arm64/include/asm/kvm_host.h
> > > @@ -24,7 +24,6 @@
> > >  #include 
> > >  #include 
> > >  #include 
> > > -#include 
> > >  
> > >  #define __KVM_HAVE_ARCH_INTC_INITIALIZED
> > >  
> > > @@ -320,7 +319,6 @@ struct kvm_vcpu_arch {
> > >   struct kvm_guest_debug_arch vcpu_debug_state;
> > >   struct kvm_guest_debug_arch external_debug_state;
> > >  
> > > - struct thread_info *host_thread_info;   /* hyp VA */
> > >   struct user_fpsimd_state *host_fpsimd_state;/* hyp VA */
> > >  
> > >   struct {
> > > @@ -616,7 +614,8 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
> > >  /* Guest/host FPSIMD coordination helpers */
> > >  int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu);
> > >  void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu);
> > > -void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu);
> > > +void kvm_arch_vcpu_enter_ctxsync_fp(struct kvm_vcpu *vcpu);
> > > +void kvm_arch_vcpu_exit_ctxsync_fp(struct kvm_vcpu *vcpu);
> > 
> > I find these names a bit confusing.
> > 
> > Maybe
> > 
> > kvm_arch_vcpu_ctxsync_fp_before_guest_enter()
> > kvm_arch_vcpu_ctxsync_fp_after_guest_exit()
> > 
> > (we could probably drop the "ctx" to make these slightly shorter).
> 
> Changed to kvm_arch_vcpu_sync_fp_{b

Re: [PATCH 2/2] KVM: arm64: Leave vcpu FPSIMD synchronization in host

2020-07-13 Thread Dave Martin
On Fri, Jul 10, 2020 at 10:57:54AM +0100, Andrew Scull wrote:
> The task state can be checked by the host and the vcpu flags updated
> before calling into hyp. This more neatly separates the concerns and
> removes the need to map the task flags to EL2.
> 
> Hyp acts on the state provided to it by the host and updates it when
> switching to the vcpu state.

Can this patch be split up?  We have a few overlapping changes here.

i.e., renaming and refactoring of hooks; moving some code around; and a
couple of other changes that are not directly related (noted below).

Overall this looks like a decent cleanup however.  It was always a bit
nasty to have to map the thread flags into Hyp.



Side question: currently we do fpsimd_save_and_flush_cpu_state() in
kvm_arch_vcpu_put_fp().  Can we remove the flush so that the vcpu state
lingers in the CPU regs and can be reclaimed when switching back to the
KVM thread?

This could remove some overhead when the KVM run loop is preempted by a
kernel thread and subsequently resumed without passing through userspace.

We would need to flush this state when anything else tries to change the
vcpu FP regs, which is one reason I skipped this previously: it would
require a bit of refactoring of fpsimd_flush_task_state() so that a non-
task context can also be flushed.

(This isn't directly related to this series.)



Additional minor comments below.

> 
> No functional change.
> 
> Signed-off-by: Andrew Scull 
> ---
>  arch/arm64/include/asm/kvm_host.h   |  5 +--
>  arch/arm64/kvm/arm.c|  4 +-
>  arch/arm64/kvm/fpsimd.c | 57 ++---
>  arch/arm64/kvm/hyp/include/hyp/switch.h | 19 -
>  arch/arm64/kvm/hyp/nvhe/switch.c|  3 +-
>  arch/arm64/kvm/hyp/vhe/switch.c |  3 +-
>  6 files changed, 48 insertions(+), 43 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_host.h 
> b/arch/arm64/include/asm/kvm_host.h
> index b06f24b5f443..ca1621eeb9d9 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -24,7 +24,6 @@
>  #include 
>  #include 
>  #include 
> -#include 
>  
>  #define __KVM_HAVE_ARCH_INTC_INITIALIZED
>  
> @@ -320,7 +319,6 @@ struct kvm_vcpu_arch {
>   struct kvm_guest_debug_arch vcpu_debug_state;
>   struct kvm_guest_debug_arch external_debug_state;
>  
> - struct thread_info *host_thread_info;   /* hyp VA */
>   struct user_fpsimd_state *host_fpsimd_state;/* hyp VA */
>  
>   struct {
> @@ -616,7 +614,8 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
>  /* Guest/host FPSIMD coordination helpers */
>  int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu);
>  void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu);
> -void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu);
> +void kvm_arch_vcpu_enter_ctxsync_fp(struct kvm_vcpu *vcpu);
> +void kvm_arch_vcpu_exit_ctxsync_fp(struct kvm_vcpu *vcpu);

I find these names a bit confusing.

Maybe

kvm_arch_vcpu_ctxsync_fp_before_guest_enter()
kvm_arch_vcpu_ctxsync_fp_after_guest_exit()

(we could probably drop the "ctx" to make these slightly shorter).

>  void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu);
>  
>  static inline bool kvm_pmu_counter_deferred(struct perf_event_attr *attr)
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index 98f05bdac3c1..c7a711ca840e 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -682,6 +682,8 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
>  
>   local_irq_disable();
>  
> + kvm_arch_vcpu_enter_ctxsync_fp(vcpu);
> +
>   kvm_vgic_flush_hwstate(vcpu);
>  
>   /*
> @@ -769,7 +771,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
>   if (static_branch_unlikely(&userspace_irqchip_in_use))
>   kvm_timer_sync_user(vcpu);
>  
> - kvm_arch_vcpu_ctxsync_fp(vcpu);
> + kvm_arch_vcpu_exit_ctxsync_fp(vcpu);
>  
>   /*
>* We may have taken a host interrupt in HYP mode (ie
> diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
> index 3e081d556e81..aec91f43df62 100644
> --- a/arch/arm64/kvm/fpsimd.c
> +++ b/arch/arm64/kvm/fpsimd.c
> @@ -27,22 +27,13 @@ int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu)
>  {
>   int ret;
>  
> - struct thread_info *ti = &current->thread_info;
>   struct user_fpsimd_state *fpsimd = &current->thread.uw.fpsimd_state;
>  
> - /*
> -  * Make sure the host task thread flags and fpsimd state are
> -  * visible to hyp:
> -  */
> - ret = create_hyp_mappings(ti, ti + 1, PAGE_HYP);
> - if (ret)
> - goto error;
> -
> + /* Make sure the host task fpsimd state is visible to hyp: */
>   ret = create_hyp_mappings(fpsimd, fpsimd + 1, PAGE_HYP);
>   if (ret)
>   goto error;
>  
> - vcpu->arch.host_thread_info = kern_hyp_va(ti);
>   vcpu->arch.host_fpsimd_state = kern_hyp_va(fpsimd);
>  error:
>   

Re: [PATCH] arm64: kvm: Remove redundant KVM_ARM64_FP_HOST flag

2020-07-13 Thread Dave Martin
On Tue, Jul 07, 2020 at 03:57:13PM +0100, Andrew Scull wrote:
> The FPSIMD registers can be in one of three states:
>  (a) loaded with the user task's state
>  (b) loaded with the vcpu's state
>  (c) dirty with transient state
> 
> KVM_ARM64_FP_HOST identifies the case (a). When loading the vcpu state,
> this is used to decide whether to save the current FPSIMD registers to
> the user task.
> 
> However, at the point of loading the vcpu's FPSIMD state, it is known
> that we are not in state (b). States (a) and (c) can be distinguished by
> checking the TIF_FOREIGN_FPSTATE bit, as was previously being done to
> prepare the KVM_ARM64_FP_HOST flag but without the need for mirroring
> the state.
> 
> Signed-off-by: Andrew Scull 

Is your new series [1] intended to replace this, or do I need to look at
both series now?

Cheers
---Dave

[1] Manage vcpu flags from the host
https://lists.cs.columbia.edu/pipermail/kvmarm/2020-July/041531.html

> ---
> This is the result of trying to get my head around the FPSIMD handling.
> If I've misunderstood something I'll be very happy to have it explained
> to me :)
> ---
>  arch/arm64/include/asm/kvm_host.h   | 11 +
>  arch/arm64/kvm/fpsimd.c |  1 -
>  arch/arm64/kvm/hyp/include/hyp/switch.h | 30 +
>  3 files changed, 26 insertions(+), 16 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_host.h 
> b/arch/arm64/include/asm/kvm_host.h
> index e0920df1d0c1..d3652745282d 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -370,12 +370,11 @@ struct kvm_vcpu_arch {
>  /* vcpu_arch flags field values: */
>  #define KVM_ARM64_DEBUG_DIRTY(1 << 0)
>  #define KVM_ARM64_FP_ENABLED (1 << 1) /* guest FP regs loaded */
> -#define KVM_ARM64_FP_HOST(1 << 2) /* host FP regs loaded */
> -#define KVM_ARM64_HOST_SVE_IN_USE(1 << 3) /* backup for host TIF_SVE */
> -#define KVM_ARM64_HOST_SVE_ENABLED   (1 << 4) /* SVE enabled for EL0 */
> -#define KVM_ARM64_GUEST_HAS_SVE  (1 << 5) /* SVE exposed to 
> guest */
> -#define KVM_ARM64_VCPU_SVE_FINALIZED (1 << 6) /* SVE config completed */
> -#define KVM_ARM64_GUEST_HAS_PTRAUTH  (1 << 7) /* PTRAUTH exposed to guest */
> +#define KVM_ARM64_HOST_SVE_IN_USE(1 << 2) /* backup for host TIF_SVE */
> +#define KVM_ARM64_HOST_SVE_ENABLED   (1 << 3) /* SVE enabled for EL0 */
> +#define KVM_ARM64_GUEST_HAS_SVE  (1 << 4) /* SVE exposed to 
> guest */
> +#define KVM_ARM64_VCPU_SVE_FINALIZED (1 << 5) /* SVE config completed */
> +#define KVM_ARM64_GUEST_HAS_PTRAUTH  (1 << 6) /* PTRAUTH exposed to guest */
>  
>  #define vcpu_has_sve(vcpu) (system_supports_sve() && \
>   ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_SVE))
> diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
> index e329a36b2bee..4e9afeb31989 100644
> --- a/arch/arm64/kvm/fpsimd.c
> +++ b/arch/arm64/kvm/fpsimd.c
> @@ -65,7 +65,6 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
>   vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED |
> KVM_ARM64_HOST_SVE_IN_USE |
> KVM_ARM64_HOST_SVE_ENABLED);
> - vcpu->arch.flags |= KVM_ARM64_FP_HOST;
>  
>   if (test_thread_flag(TIF_SVE))
>   vcpu->arch.flags |= KVM_ARM64_HOST_SVE_IN_USE;
> diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h 
> b/arch/arm64/kvm/hyp/include/hyp/switch.h
> index 8f622688fa64..beadf17f12a6 100644
> --- a/arch/arm64/kvm/hyp/include/hyp/switch.h
> +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
> @@ -33,16 +33,24 @@ extern const char __hyp_panic_string[];
>  static inline bool update_fp_enabled(struct kvm_vcpu *vcpu)
>  {
>   /*
> -  * When the system doesn't support FP/SIMD, we cannot rely on
> -  * the _TIF_FOREIGN_FPSTATE flag. However, we always inject an
> -  * abort on the very first access to FP and thus we should never
> -  * see KVM_ARM64_FP_ENABLED. For added safety, make sure we always
> +  * When entering the vcpu during a KVM_VCPU_RUN call before the vcpu
> +  * has used FPSIMD, FPSIMD is disabled for the vcpu and will trap when
> +  * it is first used. The FPSIMD state currently bound to the cpu is
> +  * that of the user task.
> +  *
> +  * After the vcpu has used FPSIMD, on subsequent entries into the vcpu
> +  * for the same KVM_VCPU_RUN call, the vcpu's FPSIMD state is bound to
> +  * the cpu. Therefore, if _TIF_FOREIGN_FPSTATE is set, we know the
> +  * FPSIMD registers no longer contain the vcpu's state. In this case we
> +  * must, once again, disable FPSIMD.
> +  *
> +  * When the system doesn't support FPSIMD, we cannot rely on the
> +  * _TIF_FOREIGN_FPSTATE flag. For added safety, make sure we always
>* trap the accesses.
>*/
>   if (!system_supports_fpsimd() ||
>   vcpu->arch.host_thread_info->flags & _TIF_FOREIGN_FPSTATE)
> - 

Re: [PATCH] arm64: kvm: Remove redundant KVM_ARM64_FP_HOST flag

2020-07-08 Thread Dave Martin
On Tue, Jul 07, 2020 at 10:33:50PM +0100, Andrew Scull wrote:
> On Tue, Jul 07, 2020 at 05:59:58PM +0100, Dave Martin wrote:
> > On Tue, Jul 07, 2020 at 03:57:13PM +0100, Andrew Scull wrote:
> > > The FPSIMD registers can be in one of three states:
> > >  (a) loaded with the user task's state
> > >  (b) loaded with the vcpu's state
> > >  (c) dirty with transient state
> > > 
> > > KVM_ARM64_FP_HOST identifies the case (a). When loading the vcpu state,
> > > this is used to decide whether to save the current FPSIMD registers to
> > > the user task.
> > > 
> > > However, at the point of loading the vcpu's FPSIMD state, it is known
> > > that we are not in state (b). States (a) and (c) can be distinguished by
> > > checking the TIF_FOREIGN_FPSTATE bit, as was previously being done to
> > > prepare the KVM_ARM64_FP_HOST flag but without the need for mirroring
> > > the state.
> > 
> > In general there's another case
> > 
> > (d) loaded with some unrelated user task's state
> > 
> > I have a vague memory that the hyp trap code is supposed to save state
> > back to whatever task it belonged to -- but functions like
> > kvm_arch_vcpu_run_map_fp() make me suspicious that if this can happen,
> > it doesn't work correctly.
> > 
> > Since you're digging anyway, I'll answer in the form of a question:
> > when we reach __hyp_handle_fpsimd(), can the state in the FPSIMD/SVE
> > regs be unsaved data belonging to another task?  I'd hope not, because
> > fpsimd_thread_switch() should have saved any dirty regs when scheduling
> > that other thread out.
> 
> IIUC, (d) would fall under state (c) as the switch to the vcpu's user
> task will have set the TIF_FOREIGN_FPSTATE bit after saving the previous
> task's state in fpsimd_thread_switch, as you hoped.
> 
> > If the regs can't be owned by another task, then there may be some scope
> > for simplifying the code along the lines you suggest...
> > 
> > (See also below)
> > 
> > > 
> > > Signed-off-by: Andrew Scull 
> > > ---
> > > This is the result of trying to get my head around the FPSIMD handling.
> > > If I've misunderstood something I'll be very happy to have it explained
> > > to me :)
> > 
> > Er, me too.  It's a while since I worked on this ;)
> 
> Thank you for the detailed reply, it's given me some very helpful
> context!
> 
> > > ---
> > >  arch/arm64/include/asm/kvm_host.h   | 11 +
> > >  arch/arm64/kvm/fpsimd.c |  1 -
> > >  arch/arm64/kvm/hyp/include/hyp/switch.h | 30 +
> > >  3 files changed, 26 insertions(+), 16 deletions(-)
> > > 
> > > diff --git a/arch/arm64/include/asm/kvm_host.h 
> > > b/arch/arm64/include/asm/kvm_host.h
> > > index e0920df1d0c1..d3652745282d 100644
> > > --- a/arch/arm64/include/asm/kvm_host.h
> > > +++ b/arch/arm64/include/asm/kvm_host.h
> > > @@ -370,12 +370,11 @@ struct kvm_vcpu_arch {
> > >  /* vcpu_arch flags field values: */
> > >  #define KVM_ARM64_DEBUG_DIRTY(1 << 0)
> > >  #define KVM_ARM64_FP_ENABLED (1 << 1) /* guest FP regs 
> > > loaded */
> > > -#define KVM_ARM64_FP_HOST(1 << 2) /* host FP regs loaded 
> > > */
> > > -#define KVM_ARM64_HOST_SVE_IN_USE(1 << 3) /* backup for host 
> > > TIF_SVE */
> > > -#define KVM_ARM64_HOST_SVE_ENABLED   (1 << 4) /* SVE enabled for EL0 
> > > */
> > > -#define KVM_ARM64_GUEST_HAS_SVE  (1 << 5) /* SVE exposed to 
> > > guest */
> > > -#define KVM_ARM64_VCPU_SVE_FINALIZED (1 << 6) /* SVE config 
> > > completed */
> > > -#define KVM_ARM64_GUEST_HAS_PTRAUTH  (1 << 7) /* PTRAUTH exposed to 
> > > guest */
> > > +#define KVM_ARM64_HOST_SVE_IN_USE(1 << 2) /* backup for host 
> > > TIF_SVE */
> > > +#define KVM_ARM64_HOST_SVE_ENABLED   (1 << 3) /* SVE enabled for EL0 
> > > */
> > > +#define KVM_ARM64_GUEST_HAS_SVE  (1 << 4) /* SVE exposed to 
> > > guest */
> > > +#define KVM_ARM64_VCPU_SVE_FINALIZED (1 << 5) /* SVE config 
> > > completed */
> > > +#define KVM_ARM64_GUEST_HAS_PTRAUTH  (1 << 6) /* PTRAUTH exposed to 
> > > guest */
> > >  
> > >  #define vcpu_has_sve(vcpu) (system_supports_sve() && \
> > >   ((vcpu)->arch.flags &

Re: [PATCH] arm64: kvm: Remove redundant KVM_ARM64_FP_HOST flag

2020-07-07 Thread Dave Martin
On Tue, Jul 07, 2020 at 03:57:13PM +0100, Andrew Scull wrote:
> The FPSIMD registers can be in one of three states:
>  (a) loaded with the user task's state
>  (b) loaded with the vcpu's state
>  (c) dirty with transient state
> 
> KVM_ARM64_FP_HOST identifies the case (a). When loading the vcpu state,
> this is used to decide whether to save the current FPSIMD registers to
> the user task.
> 
> However, at the point of loading the vcpu's FPSIMD state, it is known
> that we are not in state (b). States (a) and (c) can be distinguished by
> checking the TIF_FOREIGN_FPSTATE bit, as was previously being done to
> prepare the KVM_ARM64_FP_HOST flag but without the need for mirroring
> the state.

In general there's another case

(d) loaded with some unrelated user task's state

I have a vague memory that the hyp trap code is supposed to save state
back to whatever task it belonged to -- but functions like
kvm_arch_vcpu_run_map_fp() make me suspicious that if this can happen,
it doesn't work correctly.

Since you're digging anyway, I'll answer in the form of a question:
when we reach __hyp_handle_fpsimd(), can the state in the FPSIMD/SVE
regs be unsaved data belonging to another task?  I'd hope not, because
fpsimd_thread_switch() should have saved any dirty regs when scheduling
that other thread out.

If the regs can't be owned by another task, then there may be some scope
for simplifying the code along the lines you suggest...

(See also below)

> 
> Signed-off-by: Andrew Scull 
> ---
> This is the result of trying to get my head around the FPSIMD handling.
> If I've misunderstood something I'll be very happy to have it explained
> to me :)

Er, me too.  It's a while since I worked on this ;)

> ---
>  arch/arm64/include/asm/kvm_host.h   | 11 +
>  arch/arm64/kvm/fpsimd.c |  1 -
>  arch/arm64/kvm/hyp/include/hyp/switch.h | 30 +
>  3 files changed, 26 insertions(+), 16 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_host.h 
> b/arch/arm64/include/asm/kvm_host.h
> index e0920df1d0c1..d3652745282d 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -370,12 +370,11 @@ struct kvm_vcpu_arch {
>  /* vcpu_arch flags field values: */
>  #define KVM_ARM64_DEBUG_DIRTY(1 << 0)
>  #define KVM_ARM64_FP_ENABLED (1 << 1) /* guest FP regs loaded */
> -#define KVM_ARM64_FP_HOST(1 << 2) /* host FP regs loaded */
> -#define KVM_ARM64_HOST_SVE_IN_USE(1 << 3) /* backup for host TIF_SVE */
> -#define KVM_ARM64_HOST_SVE_ENABLED   (1 << 4) /* SVE enabled for EL0 */
> -#define KVM_ARM64_GUEST_HAS_SVE  (1 << 5) /* SVE exposed to 
> guest */
> -#define KVM_ARM64_VCPU_SVE_FINALIZED (1 << 6) /* SVE config completed */
> -#define KVM_ARM64_GUEST_HAS_PTRAUTH  (1 << 7) /* PTRAUTH exposed to guest */
> +#define KVM_ARM64_HOST_SVE_IN_USE(1 << 2) /* backup for host TIF_SVE */
> +#define KVM_ARM64_HOST_SVE_ENABLED   (1 << 3) /* SVE enabled for EL0 */
> +#define KVM_ARM64_GUEST_HAS_SVE  (1 << 4) /* SVE exposed to 
> guest */
> +#define KVM_ARM64_VCPU_SVE_FINALIZED (1 << 5) /* SVE config completed */
> +#define KVM_ARM64_GUEST_HAS_PTRAUTH  (1 << 6) /* PTRAUTH exposed to guest */
>  
>  #define vcpu_has_sve(vcpu) (system_supports_sve() && \
>   ((vcpu)->arch.flags & KVM_ARM64_GUEST_HAS_SVE))
> diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
> index e329a36b2bee..4e9afeb31989 100644
> --- a/arch/arm64/kvm/fpsimd.c
> +++ b/arch/arm64/kvm/fpsimd.c
> @@ -65,7 +65,6 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
>   vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED |
> KVM_ARM64_HOST_SVE_IN_USE |
> KVM_ARM64_HOST_SVE_ENABLED);
> - vcpu->arch.flags |= KVM_ARM64_FP_HOST;

I'm wondering whether the original code is buggy here.

If the FPSIMD/SVE regs contain some other task's data, we'd overwrite
current's regs with that data when running __hyp_handle_fpsimd().

Maybe we should have been checking TIF_FOREIGN_FPSTATE here.  If the
issue can happen, your version may fix it.

If we wanted to keep the separate flag (see below for some rationale),
it might make sense to do:

vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED |
  KVM_ARM64_HOST_SVE_IN_USE |
  KVM_ARM64_HOST_SVE_ENABLED |
  KVM_ARM64_FP_HOST);
if (!test_thread_flag(TIF_FOREIGN_FPSTATE))
vcpu->arch.flags |= KVM_ARM64_FP_HOST;

[...]

>   if (test_thread_flag(TIF_SVE))
>   vcpu->arch.flags |= KVM_ARM64_HOST_SVE_IN_USE;
> diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h 
> b/arch/arm64/kvm/hyp/include/hyp/switch.h
> index 8f622688fa64..beadf17f12a6 100644
> --- a/arch/arm64/kvm/hyp/include/hyp/switch.h
> +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
> @@ -33,16 +33,24 @@ extern const char 

Re: [RFC PATCH 0/2] MTE support for KVM guest

2020-06-24 Thread Dave Martin
On Wed, Jun 24, 2020 at 10:38:48AM +0100, Catalin Marinas wrote:
> On Tue, Jun 23, 2020 at 07:05:07PM +0100, Peter Maydell wrote:
> > On Wed, 17 Jun 2020 at 13:39, Steven Price  wrote:
> > > These patches add support to KVM to enable MTE within a guest. It is
> > > based on Catalin's v4 MTE user space series[1].
> > >
> > > [1] 
> > > http://lkml.kernel.org/r/20200515171612.1020-1-catalin.marinas%40arm.com
> > >
> > > Posting as an RFC as I'd like feedback on the approach taken.
> > 
> > What's your plan for handling tags across VM migration?
> > Will the kernel expose the tag ram to userspace so we
> > can copy it from the source machine to the destination
> > at the same time as we copy the actual ram contents ?
> 
> Qemu can map the guest memory with PROT_MTE and access the tags directly
> with LDG/STG instructions. Steven was actually asking in the cover
> letter whether we should require that the VMM maps the guest memory with
> PROT_MTE as a guarantee that it can access the guest tags.
> 
> There is no architecturally visible tag ram (tag storage), that's a
> microarchitecture detail.

If userspace maps the guest memory with PROT_MTE for dump purposes,
isn't it going to get tag check faults when accessing the memory
(i.e., when dumping the regular memory content, not the tags
specifically)?

Does it need to map two aliases, one with PROT_MTE and one without,
and is that architecturally valid?

Cheers
---Dave


Re: [PATCH 0/4] KVM/arm64: Enable PtrAuth on non-VHE KVM

2020-06-15 Thread Dave Martin
On Mon, Jun 15, 2020 at 02:22:19PM +0100, Marc Zyngier wrote:
> Hi Dave,
> 
> On 2020-06-15 13:59, Dave Martin wrote:
> >On Mon, Jun 15, 2020 at 09:19:50AM +0100, Marc Zyngier wrote:
> >>Not having PtrAuth on non-VHE KVM (for whatever reason VHE is not
> >>enabled on a v8.3 system) has always looked like an oddity. This
> >>trivial series remedies it, and allows a non-VHE KVM to offer PtrAuth
> >>to its guests.
> >
> >How likely do you think it is that people will use such a configuration?
> 
> Depending on the use case, very. See below.
> 
> >The only reason I can see for people to build a kernel with CONFIG_VHE=n
> >is as a workaround for broken hardware, or because the kernel is too old
> >to support VHE (in which case it doesn't understand ptrauth either, so
> >it is irrelevant whether ptrauth depends on VHE).
> 
> Part of the work happening around running protected VMs (which cannot
> be tampered with from EL1/0 host) makes it mandatory to disable VHE,
> so that we can wrap the host EL1 in its own Stage-2 page tables.
> We (the Android kernel team) are actively working on enabling this
> feature.
> 
> >I wonder whether it's therefore better to "encourage" people to turn
> >VHE on by making subsequent features depend on it where appropriate.
> >We do want multiplatform kernels to be configured with CONFIG_VHE=y for
> >example.
> 
> I'm all for having VHE on for platforms that support it. Which is why
> CONFIG_VHE=y is present in defconfig. However, we cannot offer the same
> level of guarantee as we can hopefully achieve with non-VHE (we can
> drop mappings from Stage-1, but can't protect VMs from an evil or
> compromised host). This is a very different use case from the usual
> "reduced hypervisor overhead" that we want in the general case.
> 
> >I ask this, because SVE suffers the same "oddity".  If SVE can be
> >enabled for non-VHE kernels straightforwardly then there's no reason not
> >to do so, but I worried in the past that this would duplicate complex
> >code that would never be tested or used.
> 
> It is a concern. I guess that if we manage to get some traction on
> Android, then the feature will get some testing! And yes, SVE is
> next on my list.
> 
> >If supporting ptrauth with !VHE is as simple as this series suggests,
> >then it's low-risk.  Perhaps SVE isn't much worse.  I was chasing nasty
> >bugs around at the time the SVE KVM support was originally written, and
> >didn't want to add more unknowns into the mix...
> 
> I think having started with a slightly smaller problem space was the
> right thing to do at the time. We are now reasonably confident that
> KVM and SVE are working correctly together, and we can now try to enable
> it on !VHE.

Cool, now I understand.

Cheers
---Dave


Re: [PATCH 0/4] KVM/arm64: Enable PtrAuth on non-VHE KVM

2020-06-15 Thread Dave Martin
On Mon, Jun 15, 2020 at 09:19:50AM +0100, Marc Zyngier wrote:
> Not having PtrAuth on non-VHE KVM (for whatever reason VHE is not
> enabled on a v8.3 system) has always looked like an oddity. This
> trivial series remedies it, and allows a non-VHE KVM to offer PtrAuth
> to its guests.

How likely do you think it is that people will use such a configuration?

The only reason I can see for people to build a kernel with CONFIG_VHE=n
is as a workaround for broken hardware, or because the kernel is too old
to support VHE (in which case it doesn't understand ptrauth either, so
it is irrelevant whether ptrauth depends on VHE).

I wonder whether it's therefore better to "encourage" people to turn
VHE on by making subsequent features depend on it where appropriate.
We do want multiplatform kernels to be configured with CONFIG_VHE=y for
example.


I ask this, because SVE suffers the same "oddity".  If SVE can be
enabled for non-VHE kernels straightforwardly then there's no reason not
to do so, but I worried in the past that this would duplicate complex
code that would never be tested or used.

If supporting ptrauth with !VHE is as simple as this series suggests,
then it's low-risk.  Perhaps SVE isn't much worse.  I was chasing nasty
bugs around at the time the SVE KVM support was originally written, and
didn't want to add more unknowns into the mix...

(Note, this is not an offer from me to do the SVE work!)

[...]

Cheers
---Dave


Re: [RFC] Add virtual SDEI support in qemu

2019-07-16 Thread Dave Martin
On Mon, Jul 15, 2019 at 03:44:46PM +0100, Mark Rutland wrote:
> On Mon, Jul 15, 2019 at 03:26:39PM +0100, James Morse wrote:
> > On 15/07/2019 14:48, Mark Rutland wrote:
> > > On Mon, Jul 15, 2019 at 02:41:00PM +0100, Dave Martin wrote:
> > >> One option (suggested to me by James Morse) would be to allow userspace
> > >> to disable in the in-kernel PSCI implementation and provide its own
> > >> PSCI to the guest via SMC -- in which case userspace that wants to
> > >> implement SDEI would have to implement PSCI as well.
> > > 
> > > I think this would be the best approach, since it puts userspace in
> > > charge of everything.
> > > 
> > > However, this interacts poorly with FW-based mitigations that we
> > > implement in hyp. I suspect we'd probably need a mechanism to delegate
> > > that responsibility back to the kernel, and figure out if that has any
> > > interaction with thigns that got punted to userspace...
> > 
> > This has come up before:
> > https://lore.kernel.org/r/59c139d0.3040...@arm.com
> > 
> > I agree Qemu should opt-in to this, it needs to be a feature that is 
> > enabled.
> > 
> > I had an early version of something like this for testing SDEI before
> > there was firmware available. The review feedback from Christoffer was
> > that it should include HVC and SMC, their immediates, and shouldn't be
> > tied to SMC-CC ranges.
> > 
> > I think this should be a catch-all as Heyi describes to deliver
> > 'unhandled SMC/HVC' to user-space as hypercall exits. We should
> > include the immediate in the struct.
> > 
> > We can allow Qemu to disable the in-kernel PSCI implementation, which
> > would let it be done in user-space via this catch-all mechanism. (PSCI
> > in user-space has come up on another thread recently). The in-kernel
> > PSCI needs to be default-on for backwards compatibility.
> > 
> > As Mark points out, the piece that's left is the 'arch workaround'
> > stuff. We always need to handle these in the kernel. I don't think
> > these should be routed-back, they should be un-obtainable by
> > user-space.
> 
> Sure; I meant that those should be handled in the kernel rather than
> going to host userspace and back.
> 
> I was suggesting was that userspace would opt into taking ownership of
> all HVC calls, then explicitly opt-in to the kernel handling specific
> (sets of) calls.

The most logical thing to do would be to have userspace handle all
calls, but add an ioctl to forward a call to KVM.  This puts userspace
in charge of the SMCCC interface, with KVM handling only those things
that userspace can't do for itself, on request.

If the performance overhead is unacceptable for certain calls, we could
have a way to delegate specific function IDs to KVM.  I suspect that
will be the exception rather than the rule.

> There are probably issues with that, but I suspect defining "all
> unhandled calls" will be problematic otherwise.

Agreed: the set of calls not handled by KVM will mutate over time.

Cheers
---Dave


Re: [RFC] Add virtual SDEI support in qemu

2019-07-16 Thread Dave Martin
On Mon, Jul 15, 2019 at 02:48:49PM +0100, Mark Rutland wrote:
> On Mon, Jul 15, 2019 at 02:41:00PM +0100, Dave Martin wrote:

[...]

> > So long as KVM_EXIT_HYPERCALL reports sufficient information so that
> > userspace can identify the cause as an SMC and retrieve the SMC
> > immediate field, this seems feasible.
> > 
> > For its own SMCCC APIs, KVM exclusively uses HVC, so rerouting SMC to
> > userspace shouldn't conflict.
> 
> Be _very_ careful here! In systems without EL3 (and without NV), SMC
> UNDEFs rather than trapping to EL2. Given that, we shouldn't build a
> hypervisor ABI that depends on SMC.

Good point.  I was hoping that was all ancient history by now, but if
not...

[...]

Cheers
---Dave


Re: [RFC] Add virtual SDEI support in qemu

2019-07-15 Thread Dave Martin
On Sat, Jul 13, 2019 at 05:53:57PM +0800, Guoheyi wrote:
> Hi folks,
> 
> Does it make sense to implement virtual SDEI in qemu? So that we can have the
> standard way for guest to handle NMI watchdog, RAS events and something else
> which involves SDEI in a physical ARM64 machine.
> 
> My basic idea is like below:
> 
> 1. Change a few lines of code in kvm to allow unhandled SMC invocations
> (like SDEI) to be sent to qemu, with exit reason of KVM_EXIT_HYPERCALL, so
> we don't need to add new API.

So long as KVM_EXIT_HYPERCALL reports sufficient information so that
userspace can identify the cause as an SMC and retrieve the SMC
immediate field, this seems feasible.

For its own SMCCC APIs, KVM exclusively uses HVC, so rerouting SMC to
userspace shouldn't conflict.

This bouncing of SMCs to userspace would need to be opt-in, otherwise
old userspace would see exits that it doesn't know what to do with.

> 2. qemu handles supported SDEI calls just as the spec says for what a
> hypervisor should do for a guest OS.
> 
> 3. For interrupts bound to hypervisor, qemu should stop injecting the IRQ to
> guest through KVM, but jump to the registered event handler directly,
> including context saving and restoring. Some interrupts like virtual timer
> are handled by kvm directly, so we may refuse to bind such interrupts to
> SDEI events.

Something like that.

Interactions between SDEI and PSCI would need some thought: for example,
after PSCI_CPU_ON, the newly online cpu needs to have SDEs masked.

One option (suggested to me by James Morse) would be to allow userspace
to disable in the in-kernel PSCI implementation and provide its own
PSCI to the guest via SMC -- in which case userspace that wants to
implement SDEI would have to implement PSCI as well.

There may be reasons why this wouldn't work ... I haven't thought about
it in depth.

Cheers
---Dave


Re: [PATCH] KVM: arm64/sve: Fix vq_present() macro to yield a bool

2019-07-04 Thread Dave Martin
On Thu, Jul 04, 2019 at 02:24:42PM +0200, Paolo Bonzini wrote:
> On 04/07/19 10:20, Marc Zyngier wrote:
> > +KVM, Paolo and Radim,
> > 
> > Guys, do you mind picking this single patch and sending it to Linus?
> > That's the only fix left for 5.2. Alternatively, I can send you a pull
> > request, but it feels overkill.
> 
> Sure, will do.
> 
> Paolo

Thanks all for the quick turnaround!

[...]

Cheers
---Dave


Re: [PATCH] KVM: arm64/sve: Fix vq_present() macro to yield a bool

2019-07-04 Thread Dave Martin
On Thu, Jul 04, 2019 at 10:04:08AM +, Zhang, Lei wrote:
> Hi guys,
> 
> I have started up KVM guest os successfully with SVE feature with Dave's patch.
> 
> Tested-by: Zhang Lei 

Thanks for verifying.

It's really your fix, I only wrote a commit message for it :)

[...]

Cheers
---Dave


Re: [PATCH 06/59] KVM: arm64: nv: Allow userspace to set PSR_MODE_EL2x

2019-07-04 Thread Dave Martin
On Wed, Jul 03, 2019 at 10:21:57AM +0100, Marc Zyngier wrote:
> On 24/06/2019 13:48, Dave Martin wrote:
> > On Fri, Jun 21, 2019 at 02:50:08PM +0100, Marc Zyngier wrote:
> >> On 21/06/2019 14:24, Julien Thierry wrote:
> >>>
> >>>
> >>> On 21/06/2019 10:37, Marc Zyngier wrote:
> >>>> From: Christoffer Dall 
> >>>>
> >>>> We were not allowing userspace to set a more privileged mode for the VCPU
> >>>> than EL1, but we should allow this when nested virtualization is enabled
> >>>> for the VCPU.
> >>>>
> >>>> Signed-off-by: Christoffer Dall 
> >>>> Signed-off-by: Marc Zyngier 
> >>>> ---
> >>>>  arch/arm64/kvm/guest.c | 6 ++
> >>>>  1 file changed, 6 insertions(+)
> >>>>
> >>>> diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
> >>>> index 3ae2f82fca46..4c35b5d51e21 100644
> >>>> --- a/arch/arm64/kvm/guest.c
> >>>> +++ b/arch/arm64/kvm/guest.c
> >>>> @@ -37,6 +37,7 @@
> >>>>  #include 
> >>>>  #include 
> >>>>  #include 
> >>>> +#include 
> >>>>  #include 
> >>>>  
> >>>>  #include "trace.h"
> >>>> @@ -194,6 +195,11 @@ static int set_core_reg(struct kvm_vcpu *vcpu, 
> >>>> const struct kvm_one_reg *reg)
> >>>>  if (vcpu_el1_is_32bit(vcpu))
> >>>>  return -EINVAL;
> >>>>  break;
> >>>> +case PSR_MODE_EL2h:
> >>>> +case PSR_MODE_EL2t:
> >>>> +if (vcpu_el1_is_32bit(vcpu) || 
> >>>> !nested_virt_in_use(vcpu))
> >>>
> >>> This condition reads a bit weirdly. Why do we care about anything else
> >>> than !nested_virt_in_use() ?
> >>>
> >>> If nested virt is not in use then obviously we return the error.
> >>>
> >>> If nested virt is in use then why do we care about EL1? Or should this
> >>> test read as "highest_el_is_32bit" ?
> >>
> >> There are multiple things at play here:
> >>
> >> - MODE_EL2x is not a valid 32bit mode
> >> - The architecture forbids nested virt with 32bit EL2
> >>
> >> The code above is a simplification of these two conditions. But
> >> certainly we can do a bit better, as kvm_reset_cpu() doesn't really
> >> check that we don't create a vcpu with both 32bit+NV. These two bits
> >> should really be exclusive.
> > 
> > This code is safe for now because KVM_VCPU_MAX_FEATURES <=
> > KVM_ARM_VCPU_NESTED_VIRT, right, i.e., nested_virt_in_use() cannot be
> > true?
> > 
> > This makes me a little uneasy, but I think that's paranoia talking: we
> > want bisectability, but no sane person should ship with just half of this
> > series.  So I guess this is fine.
> > 
> > We could stick something like
> > 
> > if (WARN_ON(...))
> > return false;
> > 
> > in nested_virt_in_use() and then remove it in the final patch, but it's
> > probably overkill.
> 
> The only case I can imagine something going wrong is if this series is
> only applied halfway, and another series bumps the maximum feature to
> something that includes NV. I guess your suggestion would solve that.

I won't lose sleep over it either way.

Cheers
---Dave


Re: [PATCH] KVM: arm64/sve: Fix vq_present() macro to yield a bool

2019-07-04 Thread Dave Martin
On Thu, Jul 04, 2019 at 08:32:52AM +0530, Viresh Kumar wrote:
> On 03-07-19, 18:42, Dave Martin wrote:
> > From: Zhang Lei 
> > 
> > The original implementation of vq_present() relied on aggressive
> > inlining in order for the compiler to know that the code is
> > correct, due to some const-casting issues.  This was causing sparse
> > and clang to complain, while GCC compiled cleanly.
> > 
> > Commit 0c529ff789bc addressed this problem, but since vq_present()
> > is no longer a function, there is now no implicit casting of the
> > returned value to the return type (bool).
> > 
> > In set_sve_vls(), this uncast bit value is compared against a bool,
> > and so may spuriously compare as unequal when both are nonzero.  As
> > a result, KVM may reject valid SVE vector length configurations as
> > invalid, and vice versa.
> > 
> > Fix it by forcing the returned value to a bool.
> > 
> > Signed-off-by: Zhang Lei 
> > Fixes: 0c529ff789bc ("KVM: arm64: Implement vq_present() as a macro")
> > Signed-off-by: Dave Martin  [commit message rewrite]
> > Cc: Viresh Kumar 
> > 
> > ---
> > 
> > Posting this under Zhang Lei's authorship, due to the need to turn this
> > fix around quickly.  The fix is as per the original suggestion [1].
> > 
> > Originally observed with the QEMU KVM SVE support under review:
> > https://lists.gnu.org/archive/html/qemu-devel/2019-06/msg04945.html
> > 
> > Bug reproduced and fix tested on the Arm Fast Model, with
> > http://linux-arm.org/git?p=kvmtool-dm.git;a=shortlog;h=refs/heads/sve/v3/head
> > (After rerunning util/update_headers.sh.)
> > 
> > (the --sve-vls command line argument was removed in v4 of the
> > kvmtool patches).
> > 
> > [1] 
> > http://lists.infradead.org/pipermail/linux-arm-kernel/2019-July/664633.html
> > ---
> >  arch/arm64/kvm/guest.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
> > index c2afa79..dfd6264 100644
> > --- a/arch/arm64/kvm/guest.c
> > +++ b/arch/arm64/kvm/guest.c
> > @@ -208,7 +208,7 @@ static int set_core_reg(struct kvm_vcpu *vcpu, const 
> > struct kvm_one_reg *reg)
> >  
> >  #define vq_word(vq) (((vq) - SVE_VQ_MIN) / 64)
> >  #define vq_mask(vq) ((u64)1 << ((vq) - SVE_VQ_MIN) % 64)
> > -#define vq_present(vqs, vq) ((vqs)[vq_word(vq)] & vq_mask(vq))
> > +#define vq_present(vqs, vq) (!!((vqs)[vq_word(vq)] & vq_mask(vq)))
> >  
> >  static int get_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg 
> > *reg)
> >  {
> 
> It was a really nice bug :)
> 
> Reviewed-by: Viresh Kumar 

Thanks for the quick review!

Maybe it makes sense to write equality comparisons on bools as !x == !y
to be more defensive against this kind of thing.  Anyway, probably best
to leave this as-is while the dust settles.

Cheers
---Dave


Re: [PATCH V3] KVM: arm64: Implement vq_present() as a macro

2019-07-03 Thread Dave Martin
On Wed, Jul 03, 2019 at 12:04:11PM +, Zhang, Lei wrote:
> Hi guys,
> 
> I can't start up KVM guest os with SVE feature with your patch.
> The error message is 
> qemu-system-aarch64: kvm_init_vcpu failed: Invalid argument.
> 
> My test enviroment.
> kernel  linux-5.2-rc6
> qemu  [Qemu-devel] [PATCH v2 00/14] target/arm/kvm: enable SVE in guests 
> https://lists.gnu.org/archive/html/qemu-devel/2019-06/msg04945.html
> KVM start up option
> -machine virt,gic-version=host,accel=kvm \
> -cpu host \
> -machine type=virt \
> -nographic \
> -smp 16 \ -m 4096 \
> -drive if=none,file=/root/image.qcow2,id=hd0,format=qcow2 \
> -device virtio-blk-device,drive=hd0 \
> -netdev user,id=mynet0,restrict=off,hostfwd=tcp::38001-:22 \
> -device virtio-net-device,netdev=mynet0 \
> -bios /root/QEMU_EFI.fd
> 
> The sve_vq_available function's return value type is bool.
> But vq_present is a macro, so its value is not only TRUE or FALSE but
> can be other numbers.
> So it failed at
> if (vq_present(vqs, vq) != sve_vq_available(vq)).
> I think it is necessary to make the vq_present macro's value only TRUE and FALSE.
> 
> arch/arm64/kvm/guest.c
> static int set_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
>   for (vq = SVE_VQ_MIN; vq <= max_vq; ++vq)
>   if (vq_present(vqs, vq) != sve_vq_available(vq)) // It failed 
> at here.
>   return -EINVAL;
> 
> My patch as follows.
> I have started up KVM guest os successfully with SVE feature with this patch.
> 
> Could you review and merge my patch?

[...]

Thanks for reporting this!  It looks like we didn't realise we dropped
the implicit cast to bool when the result was returned from the original
version of vq_present().

Your fix looks sensible to me.

For the future, see Documentation/process/submitting-patches.rst for
guidance on how to prepare a patch for submission.

However, due to the fact that we're already at -rc7 I've written a
commit message for the patch and reposted [1].  Since the fix is yours,
I'll keep your authorship and S-o-B.

Please retest when you can (though the diff should be the same).

Note, your mail seems to be corrupted, but since the diff is a one-line
fix, I'm pretty confident I decoded it correctly.  If anything looks
wrong, please let me know.

[...]

Cheers
---Dave


[1] [PATCH] KVM: arm64/sve: Fix vq_present() macro to yield a bool
http://lists.infradead.org/pipermail/linux-arm-kernel/2019-July/664745.html


[PATCH] KVM: arm64/sve: Fix vq_present() macro to yield a bool

2019-07-03 Thread Dave Martin
From: Zhang Lei 

The original implementation of vq_present() relied on aggressive
inlining in order for the compiler to know that the code is
correct, due to some const-casting issues.  This was causing sparse
and clang to complain, while GCC compiled cleanly.

Commit 0c529ff789bc addressed this problem, but since vq_present()
is no longer a function, there is now no implicit casting of the
returned value to the return type (bool).

In set_sve_vls(), this uncast bit value is compared against a bool,
and so may spuriously compare as unequal when both are nonzero.  As
a result, KVM may reject valid SVE vector length configurations as
invalid, and vice versa.

Fix it by forcing the returned value to a bool.

Signed-off-by: Zhang Lei 
Fixes: 0c529ff789bc ("KVM: arm64: Implement vq_present() as a macro")
Signed-off-by: Dave Martin  [commit message rewrite]
Cc: Viresh Kumar 

---

Posting this under Zhang Lei's authorship, due to the need to turn this
fix around quickly.  The fix is as per the original suggestion [1].

Originally observed with the QEMU KVM SVE support under review:
https://lists.gnu.org/archive/html/qemu-devel/2019-06/msg04945.html

Bug reproduced and fix tested on the Arm Fast Model, with
http://linux-arm.org/git?p=kvmtool-dm.git;a=shortlog;h=refs/heads/sve/v3/head
(After rerunning util/update_headers.sh.)

(the --sve-vls command line argument was removed in v4 of the
kvmtool patches).

[1] http://lists.infradead.org/pipermail/linux-arm-kernel/2019-July/664633.html
---
 arch/arm64/kvm/guest.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index c2afa79..dfd6264 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -208,7 +208,7 @@ static int set_core_reg(struct kvm_vcpu *vcpu, const struct 
kvm_one_reg *reg)
 
 #define vq_word(vq) (((vq) - SVE_VQ_MIN) / 64)
 #define vq_mask(vq) ((u64)1 << ((vq) - SVE_VQ_MIN) % 64)
-#define vq_present(vqs, vq) ((vqs)[vq_word(vq)] & vq_mask(vq))
+#define vq_present(vqs, vq) (!!((vqs)[vq_word(vq)] & vq_mask(vq)))
 
 static int get_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 {
-- 
2.1.4



Re: [PATCH 07/59] KVM: arm64: nv: Add EL2 system registers to vcpu context

2019-07-03 Thread Dave Martin
On Wed, Jul 03, 2019 at 01:20:55PM +0100, Marc Zyngier wrote:
> On 24/06/2019 13:54, Dave Martin wrote:
> > On Fri, Jun 21, 2019 at 10:37:51AM +0100, Marc Zyngier wrote:
> >> From: Jintack Lim 
> >>
> >> ARM v8.3 introduces a new bit in the HCR_EL2, which is the NV bit. When
> >> this bit is set, accessing EL2 registers in EL1 traps to EL2. In
> >> addition, executing the following instructions in EL1 will trap to EL2:
> >> tlbi, at, eret, and msr/mrs instructions to access SP_EL1. Most of the
> >> instructions that trap to EL2 with the NV bit were undef at EL1 prior to
> >> ARM v8.3. The only instruction that was not undef is eret.
> >>
> >> This patch sets up a handler for EL2 registers and SP_EL1 register
> >> accesses at EL1. The host hypervisor keeps those register values in
> >> memory, and will emulate their behavior.
> >>
> >> This patch doesn't set the NV bit yet. It will be set in a later patch
> >> once nested virtualization support is completed.
> >>
> >> Signed-off-by: Jintack Lim 
> >> Signed-off-by: Marc Zyngier 
> >> ---
> >>  arch/arm64/include/asm/kvm_host.h | 37 +++-
> >>  arch/arm64/include/asm/sysreg.h   | 50 -
> >>  arch/arm64/kvm/sys_regs.c | 74 ---
> >>  3 files changed, 154 insertions(+), 7 deletions(-)
> >>
> >> diff --git a/arch/arm64/include/asm/kvm_host.h 
> >> b/arch/arm64/include/asm/kvm_host.h
> >> index 4bcd9c1291d5..2d4290d2513a 100644
> >> --- a/arch/arm64/include/asm/kvm_host.h
> >> +++ b/arch/arm64/include/asm/kvm_host.h
> >> @@ -173,12 +173,47 @@ enum vcpu_sysreg {
> >>APGAKEYLO_EL1,
> >>APGAKEYHI_EL1,
> >>  
> >> -  /* 32bit specific registers. Keep them at the end of the range */
> >> +  /* 32bit specific registers. */
> > 
> > Out of interest, why did we originally want these to be at the end?
> > Because they're not at the end any more...
> 
> I seem to remember the original assembly switch code used that property.
> This is a long gone requirement, thankfully.

Ah, right.

> >>DACR32_EL2, /* Domain Access Control Register */
> >>IFSR32_EL2, /* Instruction Fault Status Register */
> >>FPEXC32_EL2,/* Floating-Point Exception Control Register */
> >>DBGVCR32_EL2,   /* Debug Vector Catch Register */
> >>  
> >> +  /* EL2 registers sorted ascending by Op0, Op1, CRn, CRm, Op2 */
> >> +  FIRST_EL2_SYSREG,
> >> +  VPIDR_EL2 = FIRST_EL2_SYSREG,
> >> +  /* Virtualization Processor ID Register */
> >> +  VMPIDR_EL2, /* Virtualization Multiprocessor ID Register */
> >> +  SCTLR_EL2,  /* System Control Register (EL2) */
> >> +  ACTLR_EL2,  /* Auxiliary Control Register (EL2) */
> >> +  HCR_EL2,/* Hypervisor Configuration Register */
> >> +  MDCR_EL2,   /* Monitor Debug Configuration Register (EL2) */
> >> +  CPTR_EL2,   /* Architectural Feature Trap Register (EL2) */
> >> +  HSTR_EL2,   /* Hypervisor System Trap Register */
> >> +  HACR_EL2,   /* Hypervisor Auxiliary Control Register */
> >> +  TTBR0_EL2,  /* Translation Table Base Register 0 (EL2) */
> >> +  TTBR1_EL2,  /* Translation Table Base Register 1 (EL2) */
> >> +  TCR_EL2,/* Translation Control Register (EL2) */
> >> +  VTTBR_EL2,  /* Virtualization Translation Table Base Register */
> >> +  VTCR_EL2,   /* Virtualization Translation Control Register */
> >> +  SPSR_EL2,   /* EL2 saved program status register */
> >> +  ELR_EL2,/* EL2 exception link register */
> >> +  AFSR0_EL2,  /* Auxiliary Fault Status Register 0 (EL2) */
> >> +  AFSR1_EL2,  /* Auxiliary Fault Status Register 1 (EL2) */
> >> +  ESR_EL2,/* Exception Syndrome Register (EL2) */
> >> +  FAR_EL2,/* Hypervisor IPA Fault Address Register */
> >> +  HPFAR_EL2,  /* Hypervisor IPA Fault Address Register */
> >> +  MAIR_EL2,   /* Memory Attribute Indirection Register (EL2) */
> >> +  AMAIR_EL2,  /* Auxiliary Memory Attribute Indirection Register 
> >> (EL2) */
> >> +  VBAR_EL2,   /* Vector Base Address Register (EL2) */
> >> +  RVBAR_EL2,  /* Reset Vector Base Address Register */
> >> +  RMR_EL2,/* Reset Management Register */
> >> +  CONTEXTIDR_EL2, /* Context ID Register (EL2) */
> >> +  TPIDR_EL2,  /* EL2 Software Thread ID Register */

Re: [PATCH 04/59] KVM: arm64: nv: Introduce nested virtualization VCPU feature

2019-07-03 Thread Dave Martin
On Wed, Jul 03, 2019 at 12:53:58PM +0100, Marc Zyngier wrote:
> On 24/06/2019 12:28, Dave Martin wrote:
> > On Fri, Jun 21, 2019 at 10:37:48AM +0100, Marc Zyngier wrote:
> >> From: Christoffer Dall 
> >>
> >> Introduce the feature bit and a primitive that checks if the feature is
> >> set behind a static key check based on the cpus_have_const_cap check.
> >>
> >> Checking nested_virt_in_use() on systems without nested virt enabled
> >> should have negligible overhead.
> >>
> >> We don't yet allow userspace to actually set this feature.
> >>
> >> Signed-off-by: Christoffer Dall 
> >> Signed-off-by: Marc Zyngier 
> >> ---
> >>  arch/arm/include/asm/kvm_nested.h   |  9 +
> >>  arch/arm64/include/asm/kvm_nested.h | 13 +
> >>  arch/arm64/include/uapi/asm/kvm.h   |  1 +
> >>  3 files changed, 23 insertions(+)
> >>  create mode 100644 arch/arm/include/asm/kvm_nested.h
> >>  create mode 100644 arch/arm64/include/asm/kvm_nested.h
> >>
> >> diff --git a/arch/arm/include/asm/kvm_nested.h 
> >> b/arch/arm/include/asm/kvm_nested.h
> >> new file mode 100644
> >> index ..124ff6445f8f
> >> --- /dev/null
> >> +++ b/arch/arm/include/asm/kvm_nested.h
> >> @@ -0,0 +1,9 @@
> >> +/* SPDX-License-Identifier: GPL-2.0 */
> >> +#ifndef __ARM_KVM_NESTED_H
> >> +#define __ARM_KVM_NESTED_H
> >> +
> >> +#include 
> >> +
> >> +static inline bool nested_virt_in_use(const struct kvm_vcpu *vcpu) { 
> >> return false; }
> >> +
> >> +#endif /* __ARM_KVM_NESTED_H */
> >> diff --git a/arch/arm64/include/asm/kvm_nested.h 
> >> b/arch/arm64/include/asm/kvm_nested.h
> >> new file mode 100644
> >> index ..8a3d121a0b42
> >> --- /dev/null
> >> +++ b/arch/arm64/include/asm/kvm_nested.h
> >> @@ -0,0 +1,13 @@
> >> +/* SPDX-License-Identifier: GPL-2.0 */
> >> +#ifndef __ARM64_KVM_NESTED_H
> >> +#define __ARM64_KVM_NESTED_H
> >> +
> >> +#include 
> >> +
> >> +static inline bool nested_virt_in_use(const struct kvm_vcpu *vcpu)
> >> +{
> >> +  return cpus_have_const_cap(ARM64_HAS_NESTED_VIRT) &&
> >> +  test_bit(KVM_ARM_VCPU_NESTED_VIRT, vcpu->arch.features);
> >> +}
> >> +
> >> +#endif /* __ARM64_KVM_NESTED_H */
> >> diff --git a/arch/arm64/include/uapi/asm/kvm.h 
> >> b/arch/arm64/include/uapi/asm/kvm.h
> >> index d819a3e8b552..563e2a8bae93 100644
> >> --- a/arch/arm64/include/uapi/asm/kvm.h
> >> +++ b/arch/arm64/include/uapi/asm/kvm.h
> >> @@ -106,6 +106,7 @@ struct kvm_regs {
> >>  #define KVM_ARM_VCPU_SVE  4 /* enable SVE for this CPU */
> >>  #define KVM_ARM_VCPU_PTRAUTH_ADDRESS  5 /* VCPU uses address 
> >> authentication */
> >>  #define KVM_ARM_VCPU_PTRAUTH_GENERIC  6 /* VCPU uses generic 
> >> authentication */
> >> +#define KVM_ARM_VCPU_NESTED_VIRT  7 /* Support nested virtualization */
> > 
> > This seems weirdly named:
> > 
> > Isn't the feature we're exposing here really EL2?  In that case, the
> > feature the guest gets with this flag enabled is plain virtualisation,
> > possibly with the option to nest further.
> > 
> > Does the guest also get nested virt (i.e., recursively nested virt from
> > the host's PoV) as a side effect, or would require an explicit extra
> > flag?
> 
> So far, there is no extra flag to describe further nesting, and it
> directly comes from EL2 being emulated. I don't mind renaming this to
> KVM_ARM_VCPU_HAS_EL2, or something similar... Whether we want userspace
> to control the exposure of the nesting capability (i.e. EL2 with
> ARMv8.3-NV) is another question.

Agreed.

KVM_ARM_VCPU_HAS_EL2 seems a reasonable name to me.

If we have have an internal flag in vcpu_arch.flags we could call that
something different (i.e., keep the NESTED_VIRT naming) if it's natural
to do so.

Cheers
---Dave


Re: [PATCH 04/59] KVM: arm64: nv: Introduce nested virtualization VCPU feature

2019-07-03 Thread Dave Martin
On Wed, Jul 03, 2019 at 12:56:51PM +0100, Marc Zyngier wrote:
> On 24/06/2019 12:43, Dave Martin wrote:
> > On Fri, Jun 21, 2019 at 10:37:48AM +0100, Marc Zyngier wrote:
> >> From: Christoffer Dall 
> >>
> >> Introduce the feature bit and a primitive that checks if the feature is
> >> set behind a static key check based on the cpus_have_const_cap check.
> >>
> >> Checking nested_virt_in_use() on systems without nested virt enabled
> >> should have negligible overhead.
> >>
> >> We don't yet allow userspace to actually set this feature.
> >>
> >> Signed-off-by: Christoffer Dall 
> >> Signed-off-by: Marc Zyngier 
> >> ---
> > 
> > [...]
> > 
> >> diff --git a/arch/arm64/include/asm/kvm_nested.h 
> >> b/arch/arm64/include/asm/kvm_nested.h
> >> new file mode 100644
> >> index ..8a3d121a0b42
> >> --- /dev/null
> >> +++ b/arch/arm64/include/asm/kvm_nested.h
> >> @@ -0,0 +1,13 @@
> >> +/* SPDX-License-Identifier: GPL-2.0 */
> >> +#ifndef __ARM64_KVM_NESTED_H
> >> +#define __ARM64_KVM_NESTED_H
> >> +
> >> +#include 
> >> +
> >> +static inline bool nested_virt_in_use(const struct kvm_vcpu *vcpu)
> >> +{
> >> +  return cpus_have_const_cap(ARM64_HAS_NESTED_VIRT) &&
> >> +  test_bit(KVM_ARM_VCPU_NESTED_VIRT, vcpu->arch.features);
> >> +}
> > 
> > Also, is it worth having a vcpu->arch.flags flag for this, similarly to
> > SVE and ptrauth?
> 
> What would we expose through this flag?

Nothing new, but possibly more efficient to access.

AFAIK, test_bit() always results in an explicit load, whereas
vcpu->arch.flags is just a variable, which we already access on some hot
paths.  So the compiler can read it once and cache it, with a bit of
luck.

For flags that are fixed after vcpu init, or flags that are only read/
written by the vcpu thread itself, this should work fine.

Cheers
---Dave


Re: [PATCH 02/59] KVM: arm64: Move __load_guest_stage2 to kvm_mmu.h

2019-07-03 Thread Dave Martin
On Wed, Jul 03, 2019 at 10:30:03AM +0100, Marc Zyngier wrote:
> On 24/06/2019 12:19, Dave Martin wrote:
> > On Fri, Jun 21, 2019 at 10:37:46AM +0100, Marc Zyngier wrote:
> >> Having __load_guest_stage2 in kvm_hyp.h is quickly going to trigger
> >> a circular include problem. In order to avoid this, let's move
> >> it to kvm_mmu.h, where it will be a better fit anyway.
> >>
> >> In the process, drop the __hyp_text annotation, which doesn't help
> >> as the function is marked as __always_inline.
> > 
> > Does GCC always inline things marked __always_inline?
> > 
> > I seem to remember some gotchas in this area, but I may be being
> > paranoid.
> 
> Yes, this is a strong guarantee. Things like static keys rely on that,
> for example.
> 
> > 
> > If this is still only called from hyp, I'd be tempted to keep the
> > __hyp_text annotation just to be on the safe side.
> 
> The trouble with that is that re-introduces the circular dependency with
> kvm_hyp.h that this patch is trying to break...

Ah, right.

I guess it's easier to put up with this, then.

Cheers
---Dave


Re: [PATCH kvmtool v4 0/8] arm64: Pointer Authentication and SVE support

2019-07-03 Thread Dave Martin
On Wed, Jul 03, 2019 at 10:35:37AM +0100, Will Deacon wrote:
> On Fri, Jun 07, 2019 at 12:26:21PM +0100, Dave Martin wrote:
> > This series, based on kvmtool master [1], implements basic support for
> > pointer authentication and SVE for guests.  This superseded the
> > previous v3 series [2].
> 
> I'd prefer to use the release headers for 5.2, so I've taken the first three
> patches for now, but I'll wait for you to repost once 5.2 is out before I
> take the rest.

Ack 

Cheers
---Dave


Re: [PATCH 09/59] KVM: arm64: nv: Add nested virt VCPU primitives for vEL2 VCPU state

2019-06-24 Thread Dave Martin
On Fri, Jun 21, 2019 at 10:37:53AM +0100, Marc Zyngier wrote:
> From: Christoffer Dall 
> 
> When running a nested hypervisor we commonly have to figure out if
> the VCPU mode is running in the context of a guest hypervisor, a guest's
> guest, or just a normal guest.
> 
> Add convenient primitives for this.
> 
> Signed-off-by: Christoffer Dall 
> Signed-off-by: Marc Zyngier 
> ---
>  arch/arm64/include/asm/kvm_emulate.h | 55 
>  1 file changed, 55 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/kvm_emulate.h 
> b/arch/arm64/include/asm/kvm_emulate.h
> index 39ffe41855bc..8f201ea56f6e 100644
> --- a/arch/arm64/include/asm/kvm_emulate.h
> +++ b/arch/arm64/include/asm/kvm_emulate.h
> @@ -191,6 +191,61 @@ static inline void vcpu_set_reg(struct kvm_vcpu *vcpu, 
> u8 reg_num,
>   vcpu_gp_regs(vcpu)->regs.regs[reg_num] = val;
>  }
>  
> +static inline bool vcpu_mode_el2_ctxt(const struct kvm_cpu_context *ctxt)
> +{
> + unsigned long cpsr = ctxt->gp_regs.regs.pstate;
> + u32 mode;
> +
> + if (cpsr & PSR_MODE32_BIT)
> + return false;
> +
> + mode = cpsr & PSR_MODE_MASK;
> +
> + return mode == PSR_MODE_EL2h || mode == PSR_MODE_EL2t;

We could also treat PSR_MODE32_BIT and PSR_MODE_MASK as a single field,
similarly to the next patch, say:

switch (ctxt->gp_regs.regs.pstate & (PSR_MODE32_BIT | PSR_MODE_MASK)) {
case PSR_MODE_EL2h:
case PSR_MODE_EL2t:
return true;
}

return false;

(This is blatant bikeshedding...)
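A runnable userspace model of the suggestion above, for what it's worth; the PSR mode encodings follow the architecture, but the context struct and function name are stand-ins rather than the kernel's:

```c
#include <assert.h>
#include <stdbool.h>

/* PSR mode encodings as architected; the context struct is a stand-in. */
#define PSR_MODE_EL2t	0x00000008
#define PSR_MODE_EL2h	0x00000009
#define PSR_MODE_MASK	0x0000000f
#define PSR_MODE32_BIT	0x00000010

struct cpu_ctxt {
	unsigned long pstate;
};

static bool ctxt_mode_is_el2(const struct cpu_ctxt *ctxt)
{
	/*
	 * Treating PSR_MODE32_BIT and PSR_MODE_MASK as one field means an
	 * AArch32 mode value can never alias an EL2 mode value.
	 */
	switch (ctxt->pstate & (PSR_MODE32_BIT | PSR_MODE_MASK)) {
	case PSR_MODE_EL2h:
	case PSR_MODE_EL2t:
		return true;
	}

	return false;
}
```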

[...]

Cheers
---Dave


Re: [PATCH 08/59] KVM: arm64: nv: Reset VMPIDR_EL2 and VPIDR_EL2 to sane values

2019-06-24 Thread Dave Martin
On Fri, Jun 21, 2019 at 10:37:52AM +0100, Marc Zyngier wrote:
> The VMPIDR_EL2 and VPIDR_EL2 are architecturally UNKNOWN at reset, but
> let's be nice to a guest hypervisor behaving foolishly and reset these
> to something reasonable anyway.

Why be nice?  Generally we do try to initialise UNKNOWN regs to garbage,
to help trip up badly-written guests.

Cheers
---Dave

> 
> Signed-off-by: Christoffer Dall 
> Signed-off-by: Marc Zyngier 
> ---
>  arch/arm64/kvm/sys_regs.c | 25 +
>  1 file changed, 21 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index e81be6debe07..693dd063c9c2 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -624,7 +624,7 @@ static void reset_amair_el1(struct kvm_vcpu *vcpu, const 
> struct sys_reg_desc *r)
>   vcpu_write_sys_reg(vcpu, amair, AMAIR_EL1);
>  }
>  
> -static void reset_mpidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> +static u64 compute_reset_mpidr(struct kvm_vcpu *vcpu)
>  {
>   u64 mpidr;
>  
> @@ -638,7 +638,24 @@ static void reset_mpidr(struct kvm_vcpu *vcpu, const 
> struct sys_reg_desc *r)
>   mpidr = (vcpu->vcpu_id & 0x0f) << MPIDR_LEVEL_SHIFT(0);
>   mpidr |= ((vcpu->vcpu_id >> 4) & 0xff) << MPIDR_LEVEL_SHIFT(1);
>   mpidr |= ((vcpu->vcpu_id >> 12) & 0xff) << MPIDR_LEVEL_SHIFT(2);
> - vcpu_write_sys_reg(vcpu, (1ULL << 31) | mpidr, MPIDR_EL1);
> + mpidr |= (1ULL << 31);
> +
> + return mpidr;
> +}
> +
> +static void reset_mpidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> +{
> + vcpu_write_sys_reg(vcpu, compute_reset_mpidr(vcpu), MPIDR_EL1);
> +}
> +
> +static void reset_vmpidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> +{
> + vcpu_write_sys_reg(vcpu, compute_reset_mpidr(vcpu), VMPIDR_EL2);
> +}
> +
> +static void reset_vpidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> +{
> + vcpu_write_sys_reg(vcpu, read_cpuid_id(), VPIDR_EL2);
>  }
>  
>  static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> @@ -1668,8 +1685,8 @@ static const struct sys_reg_desc sys_reg_descs[] = {
>*/
>   { SYS_DESC(SYS_PMCCFILTR_EL0), access_pmu_evtyper, reset_val, 
> PMCCFILTR_EL0, 0 },
>  
> - { SYS_DESC(SYS_VPIDR_EL2), access_rw, reset_val, VPIDR_EL2, 0 },
> - { SYS_DESC(SYS_VMPIDR_EL2), access_rw, reset_val, VMPIDR_EL2, 0 },
> + { SYS_DESC(SYS_VPIDR_EL2), access_rw, reset_vpidr, VPIDR_EL2 },
> + { SYS_DESC(SYS_VMPIDR_EL2), access_rw, reset_vmpidr, VMPIDR_EL2 },
>  
>   { SYS_DESC(SYS_SCTLR_EL2), access_rw, reset_val, SCTLR_EL2, 0 },
>   { SYS_DESC(SYS_ACTLR_EL2), access_rw, reset_val, ACTLR_EL2, 0 },
> -- 
> 2.20.1
> 
> 
> ___
> linux-arm-kernel mailing list
> linux-arm-ker...@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel


Re: [PATCH 07/59] KVM: arm64: nv: Add EL2 system registers to vcpu context

2019-06-24 Thread Dave Martin
On Fri, Jun 21, 2019 at 10:37:51AM +0100, Marc Zyngier wrote:
> From: Jintack Lim 
> 
> ARM v8.3 introduces a new bit in the HCR_EL2, which is the NV bit. When
> this bit is set, accessing EL2 registers in EL1 traps to EL2. In
> addition, executing the following instructions in EL1 will trap to EL2:
> tlbi, at, eret, and msr/mrs instructions to access SP_EL1. Most of the
> instructions that trap to EL2 with the NV bit were undef at EL1 prior to
> ARM v8.3. The only instruction that was not undef is eret.
> 
> This patch sets up a handler for EL2 registers and SP_EL1 register
> accesses at EL1. The host hypervisor keeps those register values in
> memory, and will emulate their behavior.
> 
> This patch doesn't set the NV bit yet. It will be set in a later patch
> once nested virtualization support is completed.
> 
> Signed-off-by: Jintack Lim 
> Signed-off-by: Marc Zyngier 
> ---
>  arch/arm64/include/asm/kvm_host.h | 37 +++-
>  arch/arm64/include/asm/sysreg.h   | 50 -
>  arch/arm64/kvm/sys_regs.c | 74 ---
>  3 files changed, 154 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_host.h 
> b/arch/arm64/include/asm/kvm_host.h
> index 4bcd9c1291d5..2d4290d2513a 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -173,12 +173,47 @@ enum vcpu_sysreg {
>   APGAKEYLO_EL1,
>   APGAKEYHI_EL1,
>  
> - /* 32bit specific registers. Keep them at the end of the range */
> + /* 32bit specific registers. */

Out of interest, why did we originally want these to be at the end?
Because they're not at the end any more...

>   DACR32_EL2, /* Domain Access Control Register */
>   IFSR32_EL2, /* Instruction Fault Status Register */
>   FPEXC32_EL2,/* Floating-Point Exception Control Register */
>   DBGVCR32_EL2,   /* Debug Vector Catch Register */
>  
> + /* EL2 registers sorted ascending by Op0, Op1, CRn, CRm, Op2 */
> + FIRST_EL2_SYSREG,
> + VPIDR_EL2 = FIRST_EL2_SYSREG,
> + /* Virtualization Processor ID Register */
> + VMPIDR_EL2, /* Virtualization Multiprocessor ID Register */
> + SCTLR_EL2,  /* System Control Register (EL2) */
> + ACTLR_EL2,  /* Auxiliary Control Register (EL2) */
> + HCR_EL2,/* Hypervisor Configuration Register */
> + MDCR_EL2,   /* Monitor Debug Configuration Register (EL2) */
> + CPTR_EL2,   /* Architectural Feature Trap Register (EL2) */
> + HSTR_EL2,   /* Hypervisor System Trap Register */
> + HACR_EL2,   /* Hypervisor Auxiliary Control Register */
> + TTBR0_EL2,  /* Translation Table Base Register 0 (EL2) */
> + TTBR1_EL2,  /* Translation Table Base Register 1 (EL2) */
> + TCR_EL2,/* Translation Control Register (EL2) */
> + VTTBR_EL2,  /* Virtualization Translation Table Base Register */
> + VTCR_EL2,   /* Virtualization Translation Control Register */
> + SPSR_EL2,   /* EL2 saved program status register */
> + ELR_EL2,/* EL2 exception link register */
> + AFSR0_EL2,  /* Auxiliary Fault Status Register 0 (EL2) */
> + AFSR1_EL2,  /* Auxiliary Fault Status Register 1 (EL2) */
> + ESR_EL2,/* Exception Syndrome Register (EL2) */
> + FAR_EL2,/* Hypervisor IPA Fault Address Register */
> + HPFAR_EL2,  /* Hypervisor IPA Fault Address Register */
> + MAIR_EL2,   /* Memory Attribute Indirection Register (EL2) */
> + AMAIR_EL2,  /* Auxiliary Memory Attribute Indirection Register 
> (EL2) */
> + VBAR_EL2,   /* Vector Base Address Register (EL2) */
> + RVBAR_EL2,  /* Reset Vector Base Address Register */
> + RMR_EL2,/* Reset Management Register */
> + CONTEXTIDR_EL2, /* Context ID Register (EL2) */
> + TPIDR_EL2,  /* EL2 Software Thread ID Register */
> + CNTVOFF_EL2,/* Counter-timer Virtual Offset register */
> + CNTHCTL_EL2,/* Counter-timer Hypervisor Control register */
> + SP_EL2, /* EL2 Stack Pointer */
> +

I wonder whether we could make these conditionally present somehow.  Not
worth worrying about for now to save 200-odd bytes per vcpu though.

[...]

Cheers
---Dave


Re: [PATCH 06/59] KVM: arm64: nv: Allow userspace to set PSR_MODE_EL2x

2019-06-24 Thread Dave Martin
On Fri, Jun 21, 2019 at 02:50:08PM +0100, Marc Zyngier wrote:
> On 21/06/2019 14:24, Julien Thierry wrote:
> > 
> > 
> > On 21/06/2019 10:37, Marc Zyngier wrote:
> >> From: Christoffer Dall 
> >>
> >> We were not allowing userspace to set a more privileged mode for the VCPU
> >> than EL1, but we should allow this when nested virtualization is enabled
> >> for the VCPU.
> >>
> >> Signed-off-by: Christoffer Dall 
> >> Signed-off-by: Marc Zyngier 
> >> ---
> >>  arch/arm64/kvm/guest.c | 6 ++
> >>  1 file changed, 6 insertions(+)
> >>
> >> diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
> >> index 3ae2f82fca46..4c35b5d51e21 100644
> >> --- a/arch/arm64/kvm/guest.c
> >> +++ b/arch/arm64/kvm/guest.c
> >> @@ -37,6 +37,7 @@
> >>  #include 
> >>  #include 
> >>  #include 
> >> +#include 
> >>  #include 
> >>  
> >>  #include "trace.h"
> >> @@ -194,6 +195,11 @@ static int set_core_reg(struct kvm_vcpu *vcpu, const 
> >> struct kvm_one_reg *reg)
> >>if (vcpu_el1_is_32bit(vcpu))
> >>return -EINVAL;
> >>break;
> >> +  case PSR_MODE_EL2h:
> >> +  case PSR_MODE_EL2t:
> >> +  if (vcpu_el1_is_32bit(vcpu) || 
> >> !nested_virt_in_use(vcpu))
> > 
> > This condition reads a bit weirdly. Why do we care about anything else
> > than !nested_virt_in_use() ?
> > 
> > If nested virt is not in use then obviously we return the error.
> > 
> > If nested virt is in use then why do we care about EL1? Or should this
> > test read as "highest_el_is_32bit" ?
> 
> There are multiple things at play here:
> 
> - MODE_EL2x is not a valid 32bit mode
> - The architecture forbids nested virt with 32bit EL2
> 
> The code above is a simplification of these two conditions. But
> certainly we can do a bit better, as kvm_reset_cpu() doesn't really
> check that we don't create a vcpu with both 32bit+NV. These two bits
> should really be exclusive.

This code is safe for now because KVM_VCPU_MAX_FEATURES <=
KVM_ARM_VCPU_NESTED_VIRT, right, i.e., nested_virt_in_use() cannot be
true?

This makes me a little uneasy, but I think that's paranoia talking: we
want bisectably, but no sane person should ship with just half of this
series.  So I guess this is fine.

We could stick something like

if (WARN_ON(...))
return false;

in nested_virt_in_use() and then remove it in the final patch, but it's
probably overkill.
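In userspace terms, the transitional guard sketched above might look like the following; the feature numbers, the WARN_ON() model and the overall shape are illustrative assumptions, not the kernel's code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

#define KVM_ARM_VCPU_NESTED_VIRT	7
#define KVM_VCPU_MAX_FEATURES		7	/* NV bit not yet settable */

struct kvm_vcpu {
	unsigned long features;
};

static bool warned;

/* Crude stand-in for the kernel's WARN_ON(): record and report. */
#define WARN_ON(cond)							\
	((cond) ? (warned = true,					\
		   fprintf(stderr, "WARN_ON(%s)\n", #cond), true) : false)

static bool nested_virt_in_use(const struct kvm_vcpu *vcpu)
{
	bool set = vcpu->features & (1UL << KVM_ARM_VCPU_NESTED_VIRT);

	/* Until the series is complete, the feature bit must be unreachable. */
	if (WARN_ON(set && KVM_VCPU_MAX_FEATURES <= KVM_ARM_VCPU_NESTED_VIRT))
		return false;

	return set;
}
```

The final patch of such a series would then delete the WARN_ON() line along with raising KVM_VCPU_MAX_FEATURES.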

Cheers
---Dave


Re: [PATCH 05/59] KVM: arm64: nv: Reset VCPU to EL2 registers if VCPU nested virt is set

2019-06-24 Thread Dave Martin
On Fri, Jun 21, 2019 at 10:37:49AM +0100, Marc Zyngier wrote:
> From: Christoffer Dall 
> 
> Reset the VCPU with PSTATE.M = EL2h when the nested virtualization
> feature is enabled on the VCPU.
> 
> Signed-off-by: Christoffer Dall 
> Signed-off-by: Marc Zyngier 
> ---
>  arch/arm64/kvm/reset.c | 7 +++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
> index 1140b4485575..675ca07dbb05 100644
> --- a/arch/arm64/kvm/reset.c
> +++ b/arch/arm64/kvm/reset.c
> @@ -52,6 +52,11 @@ static const struct kvm_regs default_regs_reset = {
>   PSR_F_BIT | PSR_D_BIT),
>  };
>  
> +static const struct kvm_regs default_regs_reset_el2 = {
> + .regs.pstate = (PSR_MODE_EL2h | PSR_A_BIT | PSR_I_BIT |
> + PSR_F_BIT | PSR_D_BIT),
> +};
> +

Is it worth having a #define for the common non-mode bits?  It's a bit
weird for EL2 and EL1 to have independent DAIF defaults.

Putting a big block of zeros in the kernel text just to initialise one
register seems overkill.  Now we're adding a third block of zeros,
maybe this is worth refactoring?  We really just need a memset(0)
followed by config-dependent initialisation of regs.pstate AFAICT.

Not a big deal though: this doesn't look like a high risk for
maintainability.
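The refactoring suggested here could be sketched as below. The PSR constants match the architectural encodings, but the struct layout, enum and helper are simplified assumptions rather than the kernel's actual code:

```c
#include <assert.h>
#include <string.h>

#define PSR_MODE_EL1h		0x00000005
#define PSR_MODE_EL2h		0x00000009
#define PSR_AA32_MODE_SVC	0x00000013
#define PSR_F_BIT		0x00000040
#define PSR_I_BIT		0x00000080
#define PSR_A_BIT		0x00000100
#define PSR_D_BIT		0x00000200

/* Shared non-mode default, as mooted above. */
#define PSR_DEFAULT_DAIF	(PSR_D_BIT | PSR_A_BIT | PSR_I_BIT | PSR_F_BIT)

struct kvm_regs {			/* simplified layout */
	unsigned long regs[31];
	unsigned long pstate;
};

enum reset_mode { RESET_EL1, RESET_EL2, RESET_AA32 };

static void reset_core_regs(struct kvm_regs *regs, enum reset_mode mode)
{
	/* One memset() replaces the static blocks of zeros... */
	memset(regs, 0, sizeof(*regs));

	/* ...followed by config-dependent initialisation of pstate. */
	switch (mode) {
	case RESET_EL2:
		regs->pstate = PSR_MODE_EL2h | PSR_DEFAULT_DAIF;
		break;
	case RESET_AA32:
		regs->pstate = PSR_AA32_MODE_SVC |
			       PSR_A_BIT | PSR_I_BIT | PSR_F_BIT;
		break;
	default:
		regs->pstate = PSR_MODE_EL1h | PSR_DEFAULT_DAIF;
	}
}
```

This keeps the three reset configurations in one place and removes the blocks of zeros from the kernel text entirely.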

Cheers
---Dave

>  static const struct kvm_regs default_regs_reset32 = {
>   .regs.pstate = (PSR_AA32_MODE_SVC | PSR_AA32_A_BIT |
>   PSR_AA32_I_BIT | PSR_AA32_F_BIT),
> @@ -302,6 +307,8 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
>   if (!cpu_has_32bit_el1())
>   goto out;
>   cpu_reset = &default_regs_reset32;
> + } else if (test_bit(KVM_ARM_VCPU_NESTED_VIRT, 
> vcpu->arch.features)) {
> + cpu_reset = &default_regs_reset_el2;
>   } else {
>   cpu_reset = &default_regs_reset;
>   }
> -- 
> 2.20.1
> 
> 


Re: [PATCH 04/59] KVM: arm64: nv: Introduce nested virtualization VCPU feature

2019-06-24 Thread Dave Martin
On Fri, Jun 21, 2019 at 10:37:48AM +0100, Marc Zyngier wrote:
> From: Christoffer Dall 
> 
> Introduce the feature bit and a primitive that checks if the feature is
> set behind a static key check based on the cpus_have_const_cap check.
> 
> Checking nested_virt_in_use() on systems without nested virt enabled
> should have negligible overhead.
> 
> We don't yet allow userspace to actually set this feature.
> 
> Signed-off-by: Christoffer Dall 
> Signed-off-by: Marc Zyngier 
> ---
>  arch/arm/include/asm/kvm_nested.h   |  9 +
>  arch/arm64/include/asm/kvm_nested.h | 13 +
>  arch/arm64/include/uapi/asm/kvm.h   |  1 +
>  3 files changed, 23 insertions(+)
>  create mode 100644 arch/arm/include/asm/kvm_nested.h
>  create mode 100644 arch/arm64/include/asm/kvm_nested.h
> 
> diff --git a/arch/arm/include/asm/kvm_nested.h 
> b/arch/arm/include/asm/kvm_nested.h
> new file mode 100644
> index ..124ff6445f8f
> --- /dev/null
> +++ b/arch/arm/include/asm/kvm_nested.h
> @@ -0,0 +1,9 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +#ifndef __ARM_KVM_NESTED_H
> +#define __ARM_KVM_NESTED_H
> +
> +#include 
> +
> +static inline bool nested_virt_in_use(const struct kvm_vcpu *vcpu) { return 
> false; }
> +
> +#endif /* __ARM_KVM_NESTED_H */
> diff --git a/arch/arm64/include/asm/kvm_nested.h 
> b/arch/arm64/include/asm/kvm_nested.h
> new file mode 100644
> index ..8a3d121a0b42
> --- /dev/null
> +++ b/arch/arm64/include/asm/kvm_nested.h
> @@ -0,0 +1,13 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +#ifndef __ARM64_KVM_NESTED_H
> +#define __ARM64_KVM_NESTED_H
> +
> +#include 
> +
> +static inline bool nested_virt_in_use(const struct kvm_vcpu *vcpu)
> +{
> + return cpus_have_const_cap(ARM64_HAS_NESTED_VIRT) &&
> + test_bit(KVM_ARM_VCPU_NESTED_VIRT, vcpu->arch.features);
> +}
> +
> +#endif /* __ARM64_KVM_NESTED_H */
> diff --git a/arch/arm64/include/uapi/asm/kvm.h 
> b/arch/arm64/include/uapi/asm/kvm.h
> index d819a3e8b552..563e2a8bae93 100644
> --- a/arch/arm64/include/uapi/asm/kvm.h
> +++ b/arch/arm64/include/uapi/asm/kvm.h
> @@ -106,6 +106,7 @@ struct kvm_regs {
>  #define KVM_ARM_VCPU_SVE 4 /* enable SVE for this CPU */
>  #define KVM_ARM_VCPU_PTRAUTH_ADDRESS 5 /* VCPU uses address authentication */
>  #define KVM_ARM_VCPU_PTRAUTH_GENERIC 6 /* VCPU uses generic authentication */
> +#define KVM_ARM_VCPU_NESTED_VIRT 7 /* Support nested virtualization */

This seems weirdly named:

Isn't the feature we're exposing here really EL2?  In that case, the
feature the guest gets with this flag enabled is plain virtualisation,
possibly with the option to nest further.

Does the guest also get nested virt (i.e., recursively nested virt from
the host's PoV) as a side effect, or would require an explicit extra
flag?

Cheers
---Dave


Re: [PATCH 03/59] arm64: Add ARM64_HAS_NESTED_VIRT cpufeature

2019-06-24 Thread Dave Martin
On Fri, Jun 21, 2019 at 10:37:47AM +0100, Marc Zyngier wrote:
> From: Jintack Lim 
> 
> Add a new ARM64_HAS_NESTED_VIRT feature to indicate that the
> CPU has the ARMv8.3 nested virtualization capability.
> 
> This will be used to support nested virtualization in KVM.
> 
> Signed-off-by: Jintack Lim 
> Signed-off-by: Andre Przywara 
> Signed-off-by: Christoffer Dall 
> Signed-off-by: Marc Zyngier 
> ---
>  .../admin-guide/kernel-parameters.txt |  4 +++
>  arch/arm64/include/asm/cpucaps.h  |  3 ++-
>  arch/arm64/include/asm/sysreg.h   |  1 +
>  arch/arm64/kernel/cpufeature.c| 26 +++
>  4 files changed, 33 insertions(+), 1 deletion(-)
> 
> diff --git a/Documentation/admin-guide/kernel-parameters.txt 
> b/Documentation/admin-guide/kernel-parameters.txt
> index 138f6664b2e2..202bb2115d83 100644
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -2046,6 +2046,10 @@
>   [KVM,ARM] Allow use of GICv4 for direct injection of
>   LPIs.
>  
> + kvm-arm.nested=
> + [KVM,ARM] Allow nested virtualization in KVM/ARM.
> + Default is 0 (disabled)
> +

In light of the discussion on this patch, is it worth making 0 not
guarantee that nested is allowed, rather than guaranteeing to disable
nested?

This would allow the option to be turned into a no-op later once the NV
code is considered mature enough to rip out all the conditionality.

[...]

Cheers
---Dave


Re: [PATCH 01/59] KVM: arm64: Migrate _elx sysreg accessors to msr_s/mrs_s

2019-06-24 Thread Dave Martin
On Fri, Jun 21, 2019 at 10:37:45AM +0100, Marc Zyngier wrote:
> From: Dave Martin 
> 
> Currently, the {read,write}_sysreg_el*() accessors for accessing
> particular ELs' sysregs in the presence of VHE rely on some local
> hacks and define their system register encodings in a way that is
> inconsistent with the core definitions in .
> 
> As a result, it is necessary to add duplicate definitions for any
> system register that already needs a definition in sysreg.h for
> other reasons.
> 
> This is a bit of a maintenance headache, and the reasons for the
> _el*() accessors working the way they do is a bit historical.
> 
> This patch gets rid of the shadow sysreg definitions in
> , converts the _el*() accessors to use the core
> __msr_s/__mrs_s interface, and converts all call sites to use the
> standard sysreg #define names (i.e., upper case, with SYS_ prefix).
> 
> This patch will conflict heavily anyway, so the opportunity taken
> to clean up some bad whitespace in the context of the changes is
> taken.

FWIW, "opportunity taken ... is taken".

Anway, Ack, thanks to you and Sudeep for keeping this alive.

Cheers
---Dave

> The change exposes a few system registers that have no sysreg.h
> definition, due to msr_s/mrs_s being used in place of msr/mrs:
> additions are made in order to fill in the gaps.
> 
> Signed-off-by: Dave Martin 
> Cc: Catalin Marinas 
> Cc: Christoffer Dall 
> Cc: Mark Rutland 
> Cc: Will Deacon 
> Link: https://www.spinics.net/lists/kvm-arm/msg31717.html
> [Rebased to v4.21-rc1]
> Signed-off-by: Sudeep Holla 
> [Rebased to v5.2-rc5, changelog updates]
> Signed-off-by: Marc Zyngier 
> ---
>  arch/arm/include/asm/kvm_hyp.h   | 13 ++--
>  arch/arm64/include/asm/kvm_emulate.h | 16 ++---
>  arch/arm64/include/asm/kvm_hyp.h | 50 ++-
>  arch/arm64/include/asm/sysreg.h  | 35 ++-
>  arch/arm64/kvm/hyp/switch.c  | 14 ++---
>  arch/arm64/kvm/hyp/sysreg-sr.c   | 78 
>  arch/arm64/kvm/hyp/tlb.c | 12 ++--
>  arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c |  2 +-
>  arch/arm64/kvm/regmap.c  |  4 +-
>  arch/arm64/kvm/sys_regs.c| 56 -
>  virt/kvm/arm/arch_timer.c| 24 
>  11 files changed, 148 insertions(+), 156 deletions(-)
> 
> diff --git a/arch/arm/include/asm/kvm_hyp.h b/arch/arm/include/asm/kvm_hyp.h
> index 87bcd18df8d5..059224fb14db 100644
> --- a/arch/arm/include/asm/kvm_hyp.h
> +++ b/arch/arm/include/asm/kvm_hyp.h
> @@ -93,13 +93,14 @@
>  #define VFP_FPEXC__ACCESS_VFP(FPEXC)
>  
>  /* AArch64 compatibility macros, only for the timer so far */
> -#define read_sysreg_el0(r)   read_sysreg(r##_el0)
> -#define write_sysreg_el0(v, r)   write_sysreg(v, r##_el0)
> +#define read_sysreg_el0(r)   read_sysreg(r##_EL0)
> +#define write_sysreg_el0(v, r)   write_sysreg(v, r##_EL0)
> +
> +#define SYS_CNTP_CTL_EL0 CNTP_CTL
> +#define SYS_CNTP_CVAL_EL0CNTP_CVAL
> +#define SYS_CNTV_CTL_EL0 CNTV_CTL
> +#define SYS_CNTV_CVAL_EL0CNTV_CVAL
>  
> -#define cntp_ctl_el0 CNTP_CTL
> -#define cntp_cval_el0CNTP_CVAL
> -#define cntv_ctl_el0 CNTV_CTL
> -#define cntv_cval_el0CNTV_CVAL
>  #define cntvoff_el2  CNTVOFF
>  #define cnthctl_el2  CNTHCTL
>  
> diff --git a/arch/arm64/include/asm/kvm_emulate.h 
> b/arch/arm64/include/asm/kvm_emulate.h
> index 613427fafff9..39ffe41855bc 100644
> --- a/arch/arm64/include/asm/kvm_emulate.h
> +++ b/arch/arm64/include/asm/kvm_emulate.h
> @@ -137,7 +137,7 @@ static inline unsigned long *__vcpu_elr_el1(const struct 
> kvm_vcpu *vcpu)
>  static inline unsigned long vcpu_read_elr_el1(const struct kvm_vcpu *vcpu)
>  {
>   if (vcpu->arch.sysregs_loaded_on_cpu)
> - return read_sysreg_el1(elr);
> + return read_sysreg_el1(SYS_ELR);
>   else
>   return *__vcpu_elr_el1(vcpu);
>  }
> @@ -145,7 +145,7 @@ static inline unsigned long vcpu_read_elr_el1(const 
> struct kvm_vcpu *vcpu)
>  static inline void vcpu_write_elr_el1(const struct kvm_vcpu *vcpu, unsigned 
> long v)
>  {
>   if (vcpu->arch.sysregs_loaded_on_cpu)
> - write_sysreg_el1(v, elr);
> + write_sysreg_el1(v, SYS_ELR);
>   else
>   *__vcpu_elr_el1(vcpu) = v;
>  }
> @@ -197,7 +197,7 @@ static inline unsigned long vcpu_read_spsr(const struct 
> kvm_vcpu *vcpu)
>   return vcpu_read_spsr32(vcpu);
>  
>   if (vcpu->arch.sysregs_loaded_on_cpu

[PATCH v4 REPOST] KVM: arm64: Filter out invalid core register IDs in KVM_GET_REG_LIST

2019-06-12 Thread Dave Martin
Since commit d26c25a9d19b ("arm64: KVM: Tighten guest core register
access from userspace"), KVM_{GET,SET}_ONE_REG rejects register IDs
that do not correspond to a single underlying architectural register.

KVM_GET_REG_LIST was not changed to match however: instead, it
simply yields a list of 32-bit register IDs that together cover the
whole kvm_regs struct.  This means that if userspace tries to use
the resulting list of IDs directly to drive calls to KVM_*_ONE_REG,
some of those calls will now fail.

This was not the intention.  Instead, iterating KVM_*_ONE_REG over
the list of IDs returned by KVM_GET_REG_LIST should be guaranteed
to work.

This patch fixes the problem by splitting validate_core_offset()
into a backend core_reg_size_from_offset() which does all of the
work except for checking that the size field in the register ID
matches, and kvm_arm_copy_reg_indices() and num_core_regs() are
converted to use this to enumerate the valid offsets.

kvm_arm_copy_reg_indices() now also sets the register ID size field
appropriately based on the value returned, so the register ID
supplied to userspace is fully qualified for use with the register
access ioctls.

Cc: sta...@vger.kernel.org
Fixes: d26c25a9d19b ("arm64: KVM: Tighten guest core register access from 
userspace")
Signed-off-by: Dave Martin 
Reviewed-by: Andrew Jones 
Tested-by: Andrew Jones 
---

This is just a repost of [1], with Andrew Jones' reviewer tags added.

[1] [PATCH] KVM: arm64: Filter out invalid core register IDs in KVM_GET_REG_LIST
https://lists.cs.columbia.edu/pipermail/kvmarm/2019-June/036093.html

 arch/arm64/kvm/guest.c | 53 +-
 1 file changed, 40 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 3ae2f82..6527c76 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -70,10 +70,8 @@ static u64 core_reg_offset_from_id(u64 id)
return id & ~(KVM_REG_ARCH_MASK | KVM_REG_SIZE_MASK | KVM_REG_ARM_CORE);
 }
 
-static int validate_core_offset(const struct kvm_vcpu *vcpu,
-   const struct kvm_one_reg *reg)
+static int core_reg_size_from_offset(const struct kvm_vcpu *vcpu, u64 off)
 {
-   u64 off = core_reg_offset_from_id(reg->id);
int size;
 
switch (off) {
@@ -103,8 +101,7 @@ static int validate_core_offset(const struct kvm_vcpu *vcpu,
return -EINVAL;
}
 
-   if (KVM_REG_SIZE(reg->id) != size ||
-   !IS_ALIGNED(off, size / sizeof(__u32)))
+   if (!IS_ALIGNED(off, size / sizeof(__u32)))
return -EINVAL;
 
/*
@@ -115,6 +112,21 @@ static int validate_core_offset(const struct kvm_vcpu 
*vcpu,
if (vcpu_has_sve(vcpu) && core_reg_offset_is_vreg(off))
return -EINVAL;
 
+   return size;
+}
+
+static int validate_core_offset(const struct kvm_vcpu *vcpu,
+   const struct kvm_one_reg *reg)
+{
+   u64 off = core_reg_offset_from_id(reg->id);
+   int size = core_reg_size_from_offset(vcpu, off);
+
+   if (size < 0)
+   return -EINVAL;
+
+   if (KVM_REG_SIZE(reg->id) != size)
+   return -EINVAL;
+
return 0;
 }
 
@@ -453,19 +465,34 @@ static int copy_core_reg_indices(const struct kvm_vcpu 
*vcpu,
 {
unsigned int i;
int n = 0;
-   const u64 core_reg = KVM_REG_ARM64 | KVM_REG_SIZE_U64 | 
KVM_REG_ARM_CORE;
 
for (i = 0; i < sizeof(struct kvm_regs) / sizeof(__u32); i++) {
-   /*
-* The KVM_REG_ARM64_SVE regs must be used instead of
-* KVM_REG_ARM_CORE for accessing the FPSIMD V-registers on
-* SVE-enabled vcpus:
-*/
-   if (vcpu_has_sve(vcpu) && core_reg_offset_is_vreg(i))
+   u64 reg = KVM_REG_ARM64 | KVM_REG_ARM_CORE | i;
+   int size = core_reg_size_from_offset(vcpu, i);
+
+   if (size < 0)
+   continue;
+
+   switch (size) {
+   case sizeof(__u32):
+   reg |= KVM_REG_SIZE_U32;
+   break;
+
+   case sizeof(__u64):
+   reg |= KVM_REG_SIZE_U64;
+   break;
+
+   case sizeof(__uint128_t):
+   reg |= KVM_REG_SIZE_U128;
+   break;
+
+   default:
+   WARN_ON(1);
continue;
+   }
 
if (uindices) {
-   if (put_user(core_reg | i, uindices))
+   if (put_user(reg, uindices))
return -EFAULT;
uindices++;
}
-- 
2.1.4



Re: [PATCH V2] KVM: arm64: Implement vq_present() as a macro

2019-06-10 Thread Dave Martin
On Mon, Jun 10, 2019 at 03:20:30PM +0530, Viresh Kumar wrote:
> On 10-06-19, 10:09, Dave Martin wrote:
> > You could drop the extra level of indirection on vqs now.  The only
> > thing it achieves is to enforce the size of the array via type-
> > checking, but the macro can't easily do that (unless you can think
> > of another way to do it).
> > 
> > Otherwise, looks good.
> 
> Below is what I wrote initially this morning and then moved to the
> current version as I wasn't sure if you would want that :)
> 
> -- 
> viresh
> 
> -8<-
> 
> From be823e68faffc82a6f621c16ce1bd45990d92791 Mon Sep 17 00:00:00 2001
> Message-Id: 
> 
> From: Viresh Kumar 
> Date: Mon, 10 Jun 2019 11:15:17 +0530
> Subject: [PATCH] KVM: arm64: Implement vq_present() as a macro
> 
> This routine is a one-liner and doesn't really need to be a function and
> can be implemented as a macro.
> 
> Suggested-by: Dave Martin 
> Signed-off-by: Viresh Kumar 
> ---
>  arch/arm64/kvm/guest.c | 12 +++-
>  1 file changed, 3 insertions(+), 9 deletions(-)
> 
> diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
> index 3ae2f82fca46..ae734fcfd4ea 100644
> --- a/arch/arm64/kvm/guest.c
> +++ b/arch/arm64/kvm/guest.c
> @@ -207,13 +207,7 @@ static int set_core_reg(struct kvm_vcpu *vcpu, const 
> struct kvm_one_reg *reg)
>  
>  #define vq_word(vq) (((vq) - SVE_VQ_MIN) / 64)
>  #define vq_mask(vq) ((u64)1 << ((vq) - SVE_VQ_MIN) % 64)
> -
> -static bool vq_present(
> - const u64 (*const vqs)[KVM_ARM64_SVE_VLS_WORDS],
> - unsigned int vq)
> -{
> - return (*vqs)[vq_word(vq)] & vq_mask(vq);
> -}
> +#define vq_present(vqs, vq) ((vqs)[vq_word(vq)] & vq_mask(vq))
>  
>  static int get_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
>  {
> @@ -258,7 +252,7 @@ static int set_sve_vls(struct kvm_vcpu *vcpu, const 
> struct kvm_one_reg *reg)
>  
>   max_vq = 0;
>   for (vq = SVE_VQ_MIN; vq <= SVE_VQ_MAX; ++vq)
> - if (vq_present(&vqs, vq))
> + if (vq_present(vqs, vq))
>   max_vq = vq;
>  
>   if (max_vq > sve_vq_from_vl(kvm_sve_max_vl))
> @@ -272,7 +266,7 @@ static int set_sve_vls(struct kvm_vcpu *vcpu, const 
> struct kvm_one_reg *reg)
>* maximum:
>*/
>   for (vq = SVE_VQ_MIN; vq <= max_vq; ++vq)
> - if (vq_present(&vqs, vq) != sve_vq_available(vq))
> + if (vq_present(vqs, vq) != sve_vq_available(vq))
>   return -EINVAL;

I think I prefer this version:

Reviewed-by: Dave Martin 

Cheers
---Dave


Re: [PATCH V2] KVM: arm64: Implement vq_present() as a macro

2019-06-10 Thread Dave Martin
On Mon, Jun 10, 2019 at 11:36:33AM +0530, Viresh Kumar wrote:
> This routine is a one-liner and doesn't really need to be a function and
> should rather be implemented as a macro.
> 
> Suggested-by: Dave Martin 
> Signed-off-by: Viresh Kumar 
> ---
> V1->V2:
> - The previous implementation was fixing a compilation error that
>   occurred only with old compilers (from 2015) due to a bug in the
>   compiler itself.
> 
> - Dave suggested to rather implement this as a macro which made more
>   sense.
> 
>  arch/arm64/kvm/guest.c | 8 +---
>  1 file changed, 1 insertion(+), 7 deletions(-)
> 
> diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
> index 3ae2f82fca46..a429ed36a6a0 100644
> --- a/arch/arm64/kvm/guest.c
> +++ b/arch/arm64/kvm/guest.c
> @@ -207,13 +207,7 @@ static int set_core_reg(struct kvm_vcpu *vcpu, const 
> struct kvm_one_reg *reg)
>  
>  #define vq_word(vq) (((vq) - SVE_VQ_MIN) / 64)
>  #define vq_mask(vq) ((u64)1 << ((vq) - SVE_VQ_MIN) % 64)
> -
> -static bool vq_present(
> - const u64 (*const vqs)[KVM_ARM64_SVE_VLS_WORDS],
> - unsigned int vq)
> -{
> - return (*vqs)[vq_word(vq)] & vq_mask(vq);
> -}
> +#define vq_present(vqs, vq) ((*(vqs))[vq_word(vq)] & vq_mask(vq))

You could drop the extra level of indirection on vqs now.  The only
thing it achieves is to enforce the size of the array via
type-checking, but the macro can't easily do that (unless you can
think of another way to do it).

Otherwise, looks good.

Cheers
---Dave


Re: [PATCH] KVM: arm64: Drop 'const' from argument of vq_present()

2019-06-07 Thread Dave Martin
On Fri, Jun 07, 2019 at 11:30:37AM +0530, Viresh Kumar wrote:
> On 04-06-19, 10:59, Dave Martin wrote:
> > On Tue, Jun 04, 2019 at 10:13:19AM +0530, Viresh Kumar wrote:
> > > We currently get following compilation warning:
> > > 
> > > arch/arm64/kvm/guest.c: In function 'set_sve_vls':
> > > arch/arm64/kvm/guest.c:262:18: warning: passing argument 1 of 
> > > 'vq_present' from incompatible pointer type
> > > arch/arm64/kvm/guest.c:212:13: note: expected 'const u64 (* const)[8]' 
> > > but argument is of type 'u64 (*)[8]'
> > > 
> > > The argument can't be const, as it is copied at runtime using
> > > copy_from_user(). Drop const from the prototype of vq_present().
> > > 
> > > Fixes: 9033bba4b535 ("KVM: arm64/sve: Add pseudo-register for the guest's 
> > > vector lengths")
> > > Signed-off-by: Viresh Kumar 
> > > ---
> > >  arch/arm64/kvm/guest.c | 2 +-
> > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > > 
> > > diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
> > > index 3ae2f82fca46..78f5a4f45e0a 100644
> > > --- a/arch/arm64/kvm/guest.c
> > > +++ b/arch/arm64/kvm/guest.c
> > > @@ -209,7 +209,7 @@ static int set_core_reg(struct kvm_vcpu *vcpu, const 
> > > struct kvm_one_reg *reg)
> > >  #define vq_mask(vq) ((u64)1 << ((vq) - SVE_VQ_MIN) % 64)
> > >  
> > >  static bool vq_present(
> > > - const u64 (*const vqs)[KVM_ARM64_SVE_VLS_WORDS],
> > > + u64 (*const vqs)[KVM_ARM64_SVE_VLS_WORDS],
> > >   unsigned int vq)
> > >  {
> > >   return (*vqs)[vq_word(vq)] & vq_mask(vq);
> > 
> > Ack, but maybe this should just be converted to a macro?
> 
> I will send a patch with that if that's what you want.

I think this would solve the problem and simplify the code a bit at the
same time.

So go for it.

Cheers
---Dave


[PATCH kvmtool v4 2/8] update_headers.sh: Cleanly report failure on error

2019-06-07 Thread Dave Martin
If an intermediate step fails, update_headers.sh blindly continues
and may return a success status.

To avoid errors going unnoticed when driving this script, exit and
report failure status as soon as something goes wrong.  For good
measure, also fail on expansion of undefined shell variables to aid
future maintainers.
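A small sketch of the two failure modes `set -ue` guards against, demonstrated in throwaway subshells (plain POSIX sh; not part of the patch itself):

```shell
#!/bin/sh
# Demonstration of what `set -ue` changes for a script like
# update_headers.sh.

# Without -e, a failing step is ignored and the script "succeeds":
sh -c 'false; echo continued anyway'

# With -e, the same failure aborts immediately:
if sh -c 'set -e; false; echo continued anyway'; then
	echo "unexpected"
else
	echo "aborted on first failure, as intended"
fi

# With -u, expanding an undefined variable is a hard error instead of
# silently expanding to "" (which could turn "$LINUX_ROOT/include"
# into just "/include"):
if sh -c 'set -u; echo "$__not_set__"' 2>/dev/null; then
	echo "unexpected"
else
	echo "undefined variable caught"
fi
```

Note that `set -e` is not airtight (it is suppressed inside `if` conditions and `&&`/`||` lists), but for a linear copy-these-files script it catches exactly the silent-continue problem described above.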

Signed-off-by: Dave Martin 
Reviewed-by: Andre Przywara 
---
 util/update_headers.sh | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/util/update_headers.sh b/util/update_headers.sh
index 4ba1b9f..a7e21b8 100755
--- a/util/update_headers.sh
+++ b/util/update_headers.sh
@@ -7,6 +7,8 @@
 # using the lib/modules/`uname -r`/source link.
 
 
+set -ue
+
 if [ "$#" -ge 1 ]
 then
LINUX_ROOT="$1"
-- 
2.1.4



[PATCH kvmtool v4 1/8] update_headers.sh: Add missing shell quoting

2019-06-07 Thread Dave Martin
update_headers.sh can break if the current working directory has a
funny name or if something odd is passed for LINUX_ROOT.

In the interest of cleanliness, quote where appropriate.
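A minimal sketch of why the quoting matters; the directory layout below is contrived purely to show the word-splitting failure:

```shell
#!/bin/sh
set -ue

# Build a fake "Linux source tree" whose path contains a space.
tmp=$(mktemp -d)
trap 'rm -rf "$tmp"' EXIT

mkdir -p "$tmp/linux src/include/uapi/linux"
LINUX_ROOT="$tmp/linux src"

# Quoted: the expansion stays one word and the test sees the real path.
[ -d "$LINUX_ROOT/include/uapi/linux" ] && echo "quoted: found it"

# Unquoted: the expansion splits on the space, so `[` sees two operands
# and errors out instead of testing the intended directory.
[ -d $LINUX_ROOT/include/uapi/linux ] 2>/dev/null || echo "unquoted: broke"
```

The same splitting would mangle the `cp` invocations, which is why the patch quotes every expansion and adds `--` to stop option parsing on odd names.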

Signed-off-by: Dave Martin 
Reviewed-by: Andre Przywara 
---
 util/update_headers.sh | 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/util/update_headers.sh b/util/update_headers.sh
index 2d93646..4ba1b9f 100755
--- a/util/update_headers.sh
+++ b/util/update_headers.sh
@@ -11,17 +11,17 @@ if [ "$#" -ge 1 ]
 then
LINUX_ROOT="$1"
 else
-   LINUX_ROOT=/lib/modules/$(uname -r)/source
+   LINUX_ROOT="/lib/modules/$(uname -r)/source"
 fi
 
-if [ ! -d $LINUX_ROOT/include/uapi/linux ]
+if [ ! -d "$LINUX_ROOT/include/uapi/linux" ]
 then
echo "$LINUX_ROOT does not seem to be valid Linux source tree."
echo "usage: $0 [path-to-Linux-source-tree]"
exit 1
 fi
 
-cp $LINUX_ROOT/include/uapi/linux/kvm.h include/linux
+cp -- "$LINUX_ROOT/include/uapi/linux/kvm.h" include/linux
 
 for arch in arm arm64 mips powerpc x86
 do
@@ -30,6 +30,6 @@ do
arm64) KVMTOOL_PATH=arm/aarch64 ;;
*) KVMTOOL_PATH=$arch ;;
esac
-   cp $LINUX_ROOT/arch/$arch/include/uapi/asm/kvm.h \
-   $KVMTOOL_PATH/include/asm
+   cp -- "$LINUX_ROOT/arch/$arch/include/uapi/asm/kvm.h" \
+   "$KVMTOOL_PATH/include/asm"
 done
-- 
2.1.4



[PATCH kvmtool v4 3/8] update_headers.sh: arm64: Copy sve_context.h if available

2019-06-07 Thread Dave Martin
The SVE KVM support for arm64 includes the additional backend
header  from .

So update this header if it is available.

To avoid creating a sudden dependency on a specific minimum kernel
version, ignore such optional headers if the source kernel tree
doesn't have them.

Signed-off-by: Dave Martin 

---

Changes since v3:

 * [Andre Przywara]: Quote argument to local (it turns out that some
   shells, including dash, require this).

 * [Andre Przywara]: Factor out copying of possibly-absent arch headers
   as optional_arch, for easier reuse later.
---
 util/update_headers.sh | 14 +-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/util/update_headers.sh b/util/update_headers.sh
index a7e21b8..bf87ef6 100755
--- a/util/update_headers.sh
+++ b/util/update_headers.sh
@@ -25,11 +25,23 @@ fi
 
 cp -- "$LINUX_ROOT/include/uapi/linux/kvm.h" include/linux
 
+unset KVMTOOL_PATH
+
+copy_optional_arch () {
+   local src="$LINUX_ROOT/arch/$arch/include/uapi/$1"
+
+   if [ -r "$src" ]
+   then
+   cp -- "$src" "$KVMTOOL_PATH/include/asm/"
+   fi
+}
+
 for arch in arm arm64 mips powerpc x86
 do
case "$arch" in
arm) KVMTOOL_PATH=arm/aarch32 ;;
-   arm64) KVMTOOL_PATH=arm/aarch64 ;;
+   arm64)  KVMTOOL_PATH=arm/aarch64
+   copy_optional_arch asm/sve_context.h ;;
*) KVMTOOL_PATH=$arch ;;
esac
cp -- "$LINUX_ROOT/arch/$arch/include/uapi/asm/kvm.h" \
-- 
2.1.4



[PATCH kvmtool v4 4/8] update_headers: Sync kvm UAPI headers with linux v5.2-rc1

2019-06-07 Thread Dave Martin
Pull in upstream UAPI headers, for subsequent arm64 SVE / ptrauth
support (among other things).

Signed-off-by: Dave Martin 
Reviewed-by: Andre Przywara 
---
 arm/aarch64/include/asm/kvm.h | 43 
 arm/aarch64/include/asm/sve_context.h | 53 +++
 include/linux/kvm.h   | 15 --
 powerpc/include/asm/kvm.h | 48 +++
 x86/include/asm/kvm.h |  1 +
 5 files changed, 158 insertions(+), 2 deletions(-)
 create mode 100644 arm/aarch64/include/asm/sve_context.h

diff --git a/arm/aarch64/include/asm/kvm.h b/arm/aarch64/include/asm/kvm.h
index 97c3478..7b7ac0f 100644
--- a/arm/aarch64/include/asm/kvm.h
+++ b/arm/aarch64/include/asm/kvm.h
@@ -35,6 +35,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #define __KVM_HAVE_GUEST_DEBUG
 #define __KVM_HAVE_IRQ_LINE
@@ -102,6 +103,9 @@ struct kvm_regs {
 #define KVM_ARM_VCPU_EL1_32BIT 1 /* CPU running a 32bit VM */
 #define KVM_ARM_VCPU_PSCI_0_2  2 /* CPU uses PSCI v0.2 */
 #define KVM_ARM_VCPU_PMU_V33 /* Support guest PMUv3 */
+#define KVM_ARM_VCPU_SVE   4 /* enable SVE for this CPU */
+#define KVM_ARM_VCPU_PTRAUTH_ADDRESS   5 /* VCPU uses address authentication */
+#define KVM_ARM_VCPU_PTRAUTH_GENERIC   6 /* VCPU uses generic authentication */
 
 struct kvm_vcpu_init {
__u32 target;
@@ -226,6 +230,45 @@ struct kvm_vcpu_events {
 KVM_REG_ARM_FW | ((r) & 0x))
 #define KVM_REG_ARM_PSCI_VERSION   KVM_REG_ARM_FW_REG(0)
 
+/* SVE registers */
+#define KVM_REG_ARM64_SVE  (0x15 << KVM_REG_ARM_COPROC_SHIFT)
+
+/* Z- and P-regs occupy blocks at the following offsets within this range: */
+#define KVM_REG_ARM64_SVE_ZREG_BASE0
+#define KVM_REG_ARM64_SVE_PREG_BASE0x400
+#define KVM_REG_ARM64_SVE_FFR_BASE 0x600
+
+#define KVM_ARM64_SVE_NUM_ZREGS__SVE_NUM_ZREGS
+#define KVM_ARM64_SVE_NUM_PREGS__SVE_NUM_PREGS
+
+#define KVM_ARM64_SVE_MAX_SLICES   32
+
+#define KVM_REG_ARM64_SVE_ZREG(n, i)   \
+   (KVM_REG_ARM64 | KVM_REG_ARM64_SVE | KVM_REG_ARM64_SVE_ZREG_BASE | \
+KVM_REG_SIZE_U2048 |   \
+(((n) & (KVM_ARM64_SVE_NUM_ZREGS - 1)) << 5) | \
+((i) & (KVM_ARM64_SVE_MAX_SLICES - 1)))
+
+#define KVM_REG_ARM64_SVE_PREG(n, i)   \
+   (KVM_REG_ARM64 | KVM_REG_ARM64_SVE | KVM_REG_ARM64_SVE_PREG_BASE | \
+KVM_REG_SIZE_U256 |\
+(((n) & (KVM_ARM64_SVE_NUM_PREGS - 1)) << 5) | \
+((i) & (KVM_ARM64_SVE_MAX_SLICES - 1)))
+
+#define KVM_REG_ARM64_SVE_FFR(i)   \
+   (KVM_REG_ARM64 | KVM_REG_ARM64_SVE | KVM_REG_ARM64_SVE_FFR_BASE | \
+KVM_REG_SIZE_U256 |\
+((i) & (KVM_ARM64_SVE_MAX_SLICES - 1)))
+
+#define KVM_ARM64_SVE_VQ_MIN __SVE_VQ_MIN
+#define KVM_ARM64_SVE_VQ_MAX __SVE_VQ_MAX
+
+/* Vector lengths pseudo-register: */
+#define KVM_REG_ARM64_SVE_VLS  (KVM_REG_ARM64 | KVM_REG_ARM64_SVE | \
+KVM_REG_SIZE_U512 | 0x)
+#define KVM_ARM64_SVE_VLS_WORDS\
+   ((KVM_ARM64_SVE_VQ_MAX - KVM_ARM64_SVE_VQ_MIN) / 64 + 1)
+
 /* Device Control API: ARM VGIC */
 #define KVM_DEV_ARM_VGIC_GRP_ADDR  0
 #define KVM_DEV_ARM_VGIC_GRP_DIST_REGS 1
diff --git a/arm/aarch64/include/asm/sve_context.h 
b/arm/aarch64/include/asm/sve_context.h
new file mode 100644
index 000..754ab75
--- /dev/null
+++ b/arm/aarch64/include/asm/sve_context.h
@@ -0,0 +1,53 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+/* Copyright (C) 2017-2018 ARM Limited */
+
+/*
+ * For use by other UAPI headers only.
+ * Do not make direct use of header or its definitions.
+ */
+
+#ifndef _UAPI__ASM_SVE_CONTEXT_H
+#define _UAPI__ASM_SVE_CONTEXT_H
+
+#include 
+
+#define __SVE_VQ_BYTES 16  /* number of bytes per quadword */
+
+#define __SVE_VQ_MIN   1
+#define __SVE_VQ_MAX   512
+
+#define __SVE_VL_MIN   (__SVE_VQ_MIN * __SVE_VQ_BYTES)
+#define __SVE_VL_MAX   (__SVE_VQ_MAX * __SVE_VQ_BYTES)
+
+#define __SVE_NUM_ZREGS32
+#define __SVE_NUM_PREGS16
+
+#define __sve_vl_valid(vl) \
+   ((vl) % __SVE_VQ_BYTES == 0 &&  \
+(vl) >= __SVE_VL_MIN &&\
+(vl) <= __SVE_VL_MAX)
+
+#define __sve_vq_from_vl(vl)   ((vl) / __SVE_VQ_BYTES)
+#define __sve_vl_from_vq(vq)   ((vq) * __SVE_VQ_BYTES)
+
+#define __SVE_ZREG_SIZE(vq)((__u32)(vq) * __SVE_VQ_BYTES)
+#define __SVE_PREG_SIZE(vq)((__u32)(vq) * (__SVE_VQ_BYTES / 8))
+#define __SVE_FFR_SIZE(vq)	__SVE_PREG_SIZE(vq)

[PATCH kvmtool v4 6/8] KVM: arm/arm64: Back out ptrauth command-line arguments

2019-06-07 Thread Dave Martin
Will says that the command-line arguments for controlling optional
vcpu features are superfluous: we don't attempt to support
migration, and this isn't QEMU.

So, remove the command-line arguments and just default pointer auth
to on if supported.

Signed-off-by: Dave Martin 

---

Changes since v3:

 * New patch.  This should probably be folded into the previous one.
---
 arm/aarch64/include/kvm/kvm-config-arch.h |  6 +-
 arm/include/arm-common/kvm-config-arch.h  |  2 --
 arm/kvm-cpu.c | 19 ---
 3 files changed, 5 insertions(+), 22 deletions(-)

diff --git a/arm/aarch64/include/kvm/kvm-config-arch.h 
b/arm/aarch64/include/kvm/kvm-config-arch.h
index 0279b13..04be43d 100644
--- a/arm/aarch64/include/kvm/kvm-config-arch.h
+++ b/arm/aarch64/include/kvm/kvm-config-arch.h
@@ -8,11 +8,7 @@
"Create PMUv3 device"), \
OPT_U64('\0', "kaslr-seed", &(cfg)->kaslr_seed, \
"Specify random seed for Kernel Address Space " \
-   "Layout Randomization (KASLR)"),\
-   OPT_BOOLEAN('\0', "enable-ptrauth", &(cfg)->enable_ptrauth, \
-   "Enables pointer authentication"),  \
-   OPT_BOOLEAN('\0', "disable-ptrauth", &(cfg)->disable_ptrauth,   \
-   "Disables pointer authentication"),
+   "Layout Randomization (KASLR)"),
 
 #include "arm-common/kvm-config-arch.h"
 
diff --git a/arm/include/arm-common/kvm-config-arch.h 
b/arm/include/arm-common/kvm-config-arch.h
index 1b4287d..5734c46 100644
--- a/arm/include/arm-common/kvm-config-arch.h
+++ b/arm/include/arm-common/kvm-config-arch.h
@@ -10,8 +10,6 @@ struct kvm_config_arch {
boolaarch32_guest;
boolhas_pmuv3;
u64 kaslr_seed;
-   boolenable_ptrauth;
-   booldisable_ptrauth;
enum irqchip_type irqchip;
u64 fw_addr;
 };
diff --git a/arm/kvm-cpu.c b/arm/kvm-cpu.c
index acd1d5f..fff8494 100644
--- a/arm/kvm-cpu.c
+++ b/arm/kvm-cpu.c
@@ -68,16 +68,9 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, unsigned 
long cpu_id)
vcpu_init.features[0] |= (1UL << KVM_ARM_VCPU_PSCI_0_2);
}
 
-   /* Check Pointer Authentication command line arguments. */
-   if (kvm->cfg.arch.enable_ptrauth && kvm->cfg.arch.disable_ptrauth)
-   die("Both enable-ptrauth and disable-ptrauth option cannot be 
present");
-   /*
-* Always enable Pointer Authentication if system supports
-* this extension unless disable-ptrauth option is present.
-*/
+   /* Enable pointer authentication if available */
if (kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_ADDRESS) &&
-   kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_GENERIC) &&
-   !kvm->cfg.arch.disable_ptrauth)
+   kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_GENERIC))
vcpu_init.features[0] |= ARM_VCPU_PTRAUTH_FEATURE;
 
/*
@@ -118,12 +111,8 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, 
unsigned long cpu_id)
die("Unable to find matching target");
}
 
-   if (err || target->init(vcpu)) {
-   if (kvm->cfg.arch.enable_ptrauth)
-   die("Unable to initialise vcpu with pointer 
authentication feature");
-   else
-   die("Unable to initialise vcpu");
-   }
+   if (err || target->init(vcpu))
+   die("Unable to initialise vcpu");
 
coalesced_offset = ioctl(kvm->sys_fd, KVM_CHECK_EXTENSION,
 KVM_CAP_COALESCED_MMIO);
-- 
2.1.4



[PATCH kvmtool v4 5/8] KVM: arm/arm64: Add a vcpu feature for pointer authentication

2019-06-07 Thread Dave Martin
From: Amit Daniel Kachhap 

This patch adds a runtime capability for KVM tool to enable Arm64 8.3
Pointer Authentication in the guest kernel. Two vcpu features
KVM_ARM_VCPU_PTRAUTH_[ADDRESS/GENERIC] are supplied together to enable
Pointer Authentication in the KVM guest after checking the capability.

Command line options --enable-ptrauth and --disable-ptrauth are added
to control this feature. If neither option is provided, the feature is
enabled by default whenever the host supports it.

Signed-off-by: Amit Daniel Kachhap 
Signed-off-by: Dave Martin  [merge new kernel headers]
---
 arm/aarch32/include/kvm/kvm-cpu-arch.h|  2 ++
 arm/aarch64/include/kvm/kvm-config-arch.h |  6 +-
 arm/aarch64/include/kvm/kvm-cpu-arch.h|  3 +++
 arm/include/arm-common/kvm-config-arch.h  |  2 ++
 arm/kvm-cpu.c | 20 ++--
 5 files changed, 30 insertions(+), 3 deletions(-)

diff --git a/arm/aarch32/include/kvm/kvm-cpu-arch.h 
b/arm/aarch32/include/kvm/kvm-cpu-arch.h
index d28ea67..3ec6f03 100644
--- a/arm/aarch32/include/kvm/kvm-cpu-arch.h
+++ b/arm/aarch32/include/kvm/kvm-cpu-arch.h
@@ -13,4 +13,6 @@
 #define ARM_CPU_ID 0, 0, 0
 #define ARM_CPU_ID_MPIDR   5
 
+#define ARM_VCPU_PTRAUTH_FEATURE   0
+
 #endif /* KVM__KVM_CPU_ARCH_H */
diff --git a/arm/aarch64/include/kvm/kvm-config-arch.h 
b/arm/aarch64/include/kvm/kvm-config-arch.h
index 04be43d..0279b13 100644
--- a/arm/aarch64/include/kvm/kvm-config-arch.h
+++ b/arm/aarch64/include/kvm/kvm-config-arch.h
@@ -8,7 +8,11 @@
"Create PMUv3 device"), \
OPT_U64('\0', "kaslr-seed", &(cfg)->kaslr_seed, \
"Specify random seed for Kernel Address Space " \
-   "Layout Randomization (KASLR)"),
+   "Layout Randomization (KASLR)"),\
+   OPT_BOOLEAN('\0', "enable-ptrauth", &(cfg)->enable_ptrauth, \
+   "Enables pointer authentication"),  \
+   OPT_BOOLEAN('\0', "disable-ptrauth", &(cfg)->disable_ptrauth,   \
+   "Disables pointer authentication"),
 
 #include "arm-common/kvm-config-arch.h"
 
diff --git a/arm/aarch64/include/kvm/kvm-cpu-arch.h 
b/arm/aarch64/include/kvm/kvm-cpu-arch.h
index a9d8563..9fa99fb 100644
--- a/arm/aarch64/include/kvm/kvm-cpu-arch.h
+++ b/arm/aarch64/include/kvm/kvm-cpu-arch.h
@@ -17,4 +17,7 @@
 #define ARM_CPU_CTRL   3, 0, 1, 0
 #define ARM_CPU_CTRL_SCTLR_EL1 0
 
+#define ARM_VCPU_PTRAUTH_FEATURE   ((1UL << KVM_ARM_VCPU_PTRAUTH_ADDRESS) \
+   | (1UL << KVM_ARM_VCPU_PTRAUTH_GENERIC))
+
 #endif /* KVM__KVM_CPU_ARCH_H */
diff --git a/arm/include/arm-common/kvm-config-arch.h 
b/arm/include/arm-common/kvm-config-arch.h
index 5734c46..1b4287d 100644
--- a/arm/include/arm-common/kvm-config-arch.h
+++ b/arm/include/arm-common/kvm-config-arch.h
@@ -10,6 +10,8 @@ struct kvm_config_arch {
boolaarch32_guest;
boolhas_pmuv3;
u64 kaslr_seed;
+   boolenable_ptrauth;
+   booldisable_ptrauth;
enum irqchip_type irqchip;
u64 fw_addr;
 };
diff --git a/arm/kvm-cpu.c b/arm/kvm-cpu.c
index 7780251..acd1d5f 100644
--- a/arm/kvm-cpu.c
+++ b/arm/kvm-cpu.c
@@ -68,6 +68,18 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, unsigned 
long cpu_id)
vcpu_init.features[0] |= (1UL << KVM_ARM_VCPU_PSCI_0_2);
}
 
+   /* Check Pointer Authentication command line arguments. */
+   if (kvm->cfg.arch.enable_ptrauth && kvm->cfg.arch.disable_ptrauth)
+   die("Both enable-ptrauth and disable-ptrauth option cannot be 
present");
+   /*
+* Always enable Pointer Authentication if system supports
+* this extension unless disable-ptrauth option is present.
+*/
+   if (kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_ADDRESS) &&
+   kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_GENERIC) &&
+   !kvm->cfg.arch.disable_ptrauth)
+   vcpu_init.features[0] |= ARM_VCPU_PTRAUTH_FEATURE;
+
/*
 * If the preferred target ioctl is successful then
 * use preferred target else try each and every target type
@@ -106,8 +118,12 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, 
unsigned long cpu_id)
die("Unable to find matching target");
}
 
-   if (err || target->init(vcpu))
-   die("Unable to initialise vcpu");
+   if (err || target->init(vcpu)) {
+   if (kvm->cfg.arch.enable_ptrauth)
+   die("Unable

[PATCH kvmtool v4 8/8] arm64: Add SVE support

2019-06-07 Thread Dave Martin
This patch enables the Scalable Vector Extension for the guest when
the host supports it.

This requires use of the new KVM_ARM_VCPU_FINALIZE ioctl before the
vcpu is runnable, so a new hook kvm_cpu__configure_features() is
added to provide an appropriate place to do this work.

Signed-off-by: Dave Martin 

---

Changes since v3:

 * Drop command-line options and simply default SVE to on where
   supported.
---
 arm/aarch32/include/kvm/kvm-cpu-arch.h |  4 
 arm/aarch64/include/kvm/kvm-cpu-arch.h |  1 +
 arm/aarch64/kvm-cpu.c  | 18 ++
 arm/kvm-cpu.c  |  3 +++
 4 files changed, 26 insertions(+)

diff --git a/arm/aarch32/include/kvm/kvm-cpu-arch.h 
b/arm/aarch32/include/kvm/kvm-cpu-arch.h
index 01983f0..780e0e2 100644
--- a/arm/aarch32/include/kvm/kvm-cpu-arch.h
+++ b/arm/aarch32/include/kvm/kvm-cpu-arch.h
@@ -15,5 +15,9 @@
 
 static inline void kvm_cpu__select_features(struct kvm *kvm,
struct kvm_vcpu_init *init) { }
+static inline int kvm_cpu__configure_features(struct kvm_cpu *vcpu)
+{
+   return 0;
+}
 
 #endif /* KVM__KVM_CPU_ARCH_H */
diff --git a/arm/aarch64/include/kvm/kvm-cpu-arch.h 
b/arm/aarch64/include/kvm/kvm-cpu-arch.h
index e6875fc..8dfb82e 100644
--- a/arm/aarch64/include/kvm/kvm-cpu-arch.h
+++ b/arm/aarch64/include/kvm/kvm-cpu-arch.h
@@ -18,5 +18,6 @@
 #define ARM_CPU_CTRL_SCTLR_EL1 0
 
 void kvm_cpu__select_features(struct kvm *kvm, struct kvm_vcpu_init *init);
+int kvm_cpu__configure_features(struct kvm_cpu *vcpu);
 
 #endif /* KVM__KVM_CPU_ARCH_H */
diff --git a/arm/aarch64/kvm-cpu.c b/arm/aarch64/kvm-cpu.c
index 8c29a21..9f3e858 100644
--- a/arm/aarch64/kvm-cpu.c
+++ b/arm/aarch64/kvm-cpu.c
@@ -136,6 +136,24 @@ void kvm_cpu__select_features(struct kvm *kvm, struct 
kvm_vcpu_init *init)
init->features[0] |= 1UL << KVM_ARM_VCPU_PTRAUTH_ADDRESS;
init->features[0] |= 1UL << KVM_ARM_VCPU_PTRAUTH_GENERIC;
}
+
+   /* Enable SVE if available */
+   if (kvm__supports_extension(kvm, KVM_CAP_ARM_SVE))
+   init->features[0] |= 1UL << KVM_ARM_VCPU_SVE;
+}
+
+int kvm_cpu__configure_features(struct kvm_cpu *vcpu)
+{
+   if (kvm__supports_extension(vcpu->kvm, KVM_CAP_ARM_SVE)) {
+   int feature = KVM_ARM_VCPU_SVE;
+
+   if (ioctl(vcpu->vcpu_fd, KVM_ARM_VCPU_FINALIZE, &feature)) {
+   pr_err("KVM_ARM_VCPU_FINALIZE: %s", strerror(errno));
+   return -1;
+   }
+   }
+
+   return 0;
 }
 
 void kvm_cpu__reset_vcpu(struct kvm_cpu *vcpu)
diff --git a/arm/kvm-cpu.c b/arm/kvm-cpu.c
index 1652f6f..554414f 100644
--- a/arm/kvm-cpu.c
+++ b/arm/kvm-cpu.c
@@ -124,6 +124,9 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, 
unsigned long cpu_id)
vcpu->cpu_compatible= target->compatible;
vcpu->is_running= true;
 
+   if (kvm_cpu__configure_features(vcpu))
+   die("Unable to configure requested vcpu features");
+
return vcpu;
 }
 
-- 
2.1.4



[PATCH kvmtool v4 7/8] arm/arm64: Factor out ptrauth vcpu feature setup

2019-06-07 Thread Dave Martin
In the interest of readability, factor out the vcpu feature setup
for ptrauth into a separate function.

Also, because aarch32 doesn't have this feature or the related
command line options anyway, move the actual code into aarch64/.

Since ARM_VCPU_PTRAUTH_FEATURE is only there to make the ptrauth
feature setup code compile on arm, it is no longer needed: inline
and remove it.

Signed-off-by: Dave Martin 
---
 arm/aarch32/include/kvm/kvm-cpu-arch.h |  3 ++-
 arm/aarch64/include/kvm/kvm-cpu-arch.h |  3 +--
 arm/aarch64/kvm-cpu.c  | 10 ++
 arm/kvm-cpu.c  |  5 +
 4 files changed, 14 insertions(+), 7 deletions(-)

diff --git a/arm/aarch32/include/kvm/kvm-cpu-arch.h 
b/arm/aarch32/include/kvm/kvm-cpu-arch.h
index 3ec6f03..01983f0 100644
--- a/arm/aarch32/include/kvm/kvm-cpu-arch.h
+++ b/arm/aarch32/include/kvm/kvm-cpu-arch.h
@@ -13,6 +13,7 @@
 #define ARM_CPU_ID 0, 0, 0
 #define ARM_CPU_ID_MPIDR   5
 
-#define ARM_VCPU_PTRAUTH_FEATURE   0
+static inline void kvm_cpu__select_features(struct kvm *kvm,
+   struct kvm_vcpu_init *init) { }
 
 #endif /* KVM__KVM_CPU_ARCH_H */
diff --git a/arm/aarch64/include/kvm/kvm-cpu-arch.h 
b/arm/aarch64/include/kvm/kvm-cpu-arch.h
index 9fa99fb..e6875fc 100644
--- a/arm/aarch64/include/kvm/kvm-cpu-arch.h
+++ b/arm/aarch64/include/kvm/kvm-cpu-arch.h
@@ -17,7 +17,6 @@
 #define ARM_CPU_CTRL   3, 0, 1, 0
 #define ARM_CPU_CTRL_SCTLR_EL1 0
 
-#define ARM_VCPU_PTRAUTH_FEATURE   ((1UL << KVM_ARM_VCPU_PTRAUTH_ADDRESS) \
-   | (1UL << KVM_ARM_VCPU_PTRAUTH_GENERIC))
+void kvm_cpu__select_features(struct kvm *kvm, struct kvm_vcpu_init *init);
 
 #endif /* KVM__KVM_CPU_ARCH_H */
diff --git a/arm/aarch64/kvm-cpu.c b/arm/aarch64/kvm-cpu.c
index 0aaefaf..8c29a21 100644
--- a/arm/aarch64/kvm-cpu.c
+++ b/arm/aarch64/kvm-cpu.c
@@ -128,6 +128,16 @@ static void reset_vcpu_aarch64(struct kvm_cpu *vcpu)
}
 }
 
+void kvm_cpu__select_features(struct kvm *kvm, struct kvm_vcpu_init *init)
+{
+   /* Enable pointer authentication if available */
+   if (kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_ADDRESS) &&
+   kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_GENERIC)) {
+   init->features[0] |= 1UL << KVM_ARM_VCPU_PTRAUTH_ADDRESS;
+   init->features[0] |= 1UL << KVM_ARM_VCPU_PTRAUTH_GENERIC;
+   }
+}
+
 void kvm_cpu__reset_vcpu(struct kvm_cpu *vcpu)
 {
if (vcpu->kvm->cfg.arch.aarch32_guest)
diff --git a/arm/kvm-cpu.c b/arm/kvm-cpu.c
index fff8494..1652f6f 100644
--- a/arm/kvm-cpu.c
+++ b/arm/kvm-cpu.c
@@ -68,10 +68,7 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, unsigned 
long cpu_id)
vcpu_init.features[0] |= (1UL << KVM_ARM_VCPU_PSCI_0_2);
}
 
-   /* Enable pointer authentication if available */
-   if (kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_ADDRESS) &&
-   kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_GENERIC))
-   vcpu_init.features[0] |= ARM_VCPU_PTRAUTH_FEATURE;
+   kvm_cpu__select_features(kvm, &vcpu_init);
 
/*
 * If the preferred target ioctl is successful then
-- 
2.1.4



[PATCH kvmtool v4 0/8] arm64: Pointer Authentication and SVE support

2019-06-07 Thread Dave Martin
This series, based on kvmtool master [1], implements basic support for
pointer authentication and SVE for guests.  This supersedes the
previous v3 series [2].

A git tree is also available [3].

For pointer auth, I include Amit's v10 patch [4], with some additional
refactoring to sit nicely alongside SVE, and some cosmetic / diagnostic
tidyups discussed during review on-list.  (I've kept the extra changes
separate for easier review, but they could be folded if desired.)

[Maintainer note: I'd like Amit to comment on my changes on top of his
pointer auth patch so that that can be folded together as appropriate.]


This series has been tested against Linux v5.2-rc1.

After discussion with Will, the command-line options for controlling
pointer auth and SVE support have all been dropped, since they are not
useful to normal users: instead, we just default to the best
configuration that the host supports.  There's always the option to add
this functionality back in some more appropriate form later, if someone
has a use for it.

See the individual patches for other minor changes.

[1] 
git://git.kernel.org/pub/scm/linux/kernel/git/will/kvmtool.git master
https://git.kernel.org/pub/scm/linux/kernel/git/will/kvmtool.git/log/
eaeaf60808d6 ("virtio/blk: Avoid taking pointer to packed struct")

[2] [PATCH kvmtool v3 0/9] arm64: Pointer Authentication and SVE support
https://lists.cs.columbia.edu/pipermail/kvmarm/2019-May/036050.html

[3]
git://linux-arm.org/kvmtool-dm.git sve/v4/head
http://linux-arm.org/git?p=kvmtool-dm.git;a=shortlog;h=refs/heads/sve/v4/head

[4] [PATCH v10 3/5] KVM: arm64: Add userspace flag to enable pointer 
authentication
https://lore.kernel.org/linux-arm-kernel/1555994558-26349-6-git-send-email-amit.kach...@arm.com/


Amit Daniel Kachhap (1):
  KVM: arm/arm64: Add a vcpu feature for pointer authentication

Dave Martin (7):
  update_headers.sh: Add missing shell quoting
  update_headers.sh: Cleanly report failure on error
  update_headers.sh: arm64: Copy sve_context.h if available
  update_headers: Sync kvm UAPI headers with linux v5.2-rc1
  KVM: arm/arm64: Back out ptrauth command-line arguments
  arm/arm64: Factor out ptrauth vcpu feature setup
  arm64: Add SVE support

 arm/aarch32/include/kvm/kvm-cpu-arch.h |  7 +
 arm/aarch64/include/asm/kvm.h  | 43 +++
 arm/aarch64/include/asm/sve_context.h  | 53 ++
 arm/aarch64/include/kvm/kvm-cpu-arch.h |  3 ++
 arm/aarch64/kvm-cpu.c  | 28 ++
 arm/kvm-cpu.c  |  5 
 include/linux/kvm.h| 15 --
 powerpc/include/asm/kvm.h  | 48 ++
 util/update_headers.sh | 26 +
 x86/include/asm/kvm.h  |  1 +
 10 files changed, 221 insertions(+), 8 deletions(-)
 create mode 100644 arm/aarch64/include/asm/sve_context.h

-- 
2.1.4



Re: [PATCH 2/2] KVM: arm/arm64: vgic: Fix irq refcount leak in kvm_vgic_set_owner()

2019-06-06 Thread Dave Martin
On Thu, Jun 06, 2019 at 01:06:33PM +0100, Marc Zyngier wrote:
> On 06/06/2019 11:58, Dave Martin wrote:
> > kvm_vgic_set_owner() leaks a reference on the vgic_irq descriptor,
> > which does not seem to match up with any vgic_put_irq() that I can
> > find.
> > 
> > Since the irq pointer is not passed out and the caller must anyway
> > subsequently use vgic_get_irq() when it wants a pointer, it is not
> > clear why we should have a dangling refcount here.
> > 
> > The refcount is still needed inside kvm_vgic_set_owner() to prevent
> > the vgic_irq struct from disappearing while it is
> > manipulated.
> > 
> > So, keep the vgic_get_irq() here, but add the matching
> > vgic_put_irq() before returning.
> > 
> > unreferenced object 0x800b6365ab80 (size 128):
> >   comm "qemu-system-aar", pid 14414, jiffies 4300822606 (age 84.436s)
> >   hex dump (first 32 bytes):
> > 00 00 00 00 00 00 00 00 b0 e1 e0 38 00 00 ff ff  ...8
> > b0 e1 e0 38 00 00 ff ff 78 e6 ad dd 0a 80 ff ff  ...8x...
> >   backtrace:
> > [<a08b80e2>] kmem_cache_alloc+0x178/0x208
> > [<114591cb>] vgic_add_lpi.part.5+0x34/0x190
> > [<ec1425ae>] vgic_its_cmd_handle_mapi+0x320/0x348
> > [<935c5c32>] vgic_its_process_commands.part.14+0x350/0x8b8
> > [<dc256d2c>] vgic_mmio_write_its_cwriter+0x78/0x98
> > [<00008659acd2>] dispatch_mmio_write+0xd4/0x120
> > 
> > [...]
> > 
> > Cc: Christoffer Dall 
> > Fixes: c6ccd30e0de3 ("KVM: arm/arm64: Introduce an allocator for in-kernel 
> > irq lines")
> > Signed-off-by: Dave Martin 
> > 
> > ---
> > 
> > Based on the limited testing I've done so far, the patch _appears_ to
> > fix the bug.
> > 
> > However, I still don't understand why the bug is intermittent, or why
> > the arch_timer or pmu (the only apparent users of kvm_vgic_set_owner())
> > are claiming an LPI in the first place.
> > 
> > So there may be other bugs in the mix, or I may have misunderstood
> > something...
> 
> Yeah, this doesn't make much sense. Both timer and PMU are using PPIs,
> which are not refcounted, so this vgic_put_irq() is effectively a NOP.
> It doesn't invalidate the patch itself, it is just that I seriously
> doubt it fixes anything.
> 
> LPIs do not use the owner field so far, so we must have another get/put
> mismatch somewhere.

No argument from me.

As I say, this change _appeared_ to make this leak go away, but I
couldn't understand why, and didn't kick it very thoroughly.  So it
may well be a red herring.

Cheers
---Dave


[PATCH 1/2] KVM: arm/arm64: vgic: Fix kvm_device leak in vgic_its_destroy

2019-06-06 Thread Dave Martin
kvm_device->destroy() seems to be supposed to free its kvm_device
struct, but vgic_its_destroy() is not currently doing this,
resulting in a memory leak, which shows up in kmemleak reports such as
the following:

unreferenced object 0x800aeddfe280 (size 128):
  comm "qemu-system-aar", pid 13799, jiffies 4299827317 (age 1569.844s)
  [...]
  backtrace:
[<a08b80e2>] kmem_cache_alloc+0x178/0x208
[<dcad2bd3>] kvm_vm_ioctl+0x350/0xbc0

Fix it.

Cc: Andre Przywara 
Fixes: 1085fdc68c60 ("KVM: arm64: vgic-its: Introduce new KVM ITS device")
Signed-off-by: Dave Martin 

---

This was observed with native qemu on ThunderX2, on a merge of v5.1 with
kvmarm/next commit 9eecfc22e0bf ("KVM: arm64: Fix ptrauth ID register
masking logic").  This may not be a new regression, though.

My qemu invocation was:

$ qemu-system-aarch64 -machine virt,accel=kvm,gic_version=3 -cpu host \
-smp 4 -nographic \
-drive id=vblock,file=block.qcow2,format=qcow2,if=none \
-device virtio-blk-device,drive=vblock \
-kernel Image -append 'root=/dev/vda1 ro'
---
 virt/kvm/arm/vgic/vgic-its.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/virt/kvm/arm/vgic/vgic-its.c b/virt/kvm/arm/vgic/vgic-its.c
index 44ceaccb..8c9fe83 100644
--- a/virt/kvm/arm/vgic/vgic-its.c
+++ b/virt/kvm/arm/vgic/vgic-its.c
@@ -1734,6 +1734,7 @@ static void vgic_its_destroy(struct kvm_device *kvm_dev)
 
mutex_unlock(&its->its_lock);
kfree(its);
+   kfree(kvm_dev);/* alloc by kvm_ioctl_create_device, free by .destroy */
 }
 
 static int vgic_its_has_attr_regs(struct kvm_device *dev,
-- 
2.1.4



[PATCH 2/2] KVM: arm/arm64: vgic: Fix irq refcount leak in kvm_vgic_set_owner()

2019-06-06 Thread Dave Martin
kvm_vgic_set_owner() leaks a reference on the vgic_irq descriptor,
which does not seem to match up with any vgic_put_irq() that I can
find.

Since the irq pointer is not passed out and the caller must anyway
subsequently use vgic_get_irq() when it wants a pointer, it is not
clear why we should have a dangling refcount here.

The refcount is still needed inside kvm_vgic_set_owner() to prevent
the vgic_irq struct from disappearing while it is
manipulated.

So, keep the vgic_get_irq() here, but add the matching
vgic_put_irq() before returning.
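
In other words, the reference is scoped to the update itself. A rough illustration of the intended pattern, with invented names standing in for the kernel API:

```c
#include <assert.h>

/* Simplified stand-ins; the real locking and lookup are omitted. */
struct obj {
	int refcount;
	void *owner;
};

static struct obj *obj_get(struct obj *o)
{
	o->refcount++;		/* pin the object while it is manipulated */
	return o;
}

static void obj_put(struct obj *o)
{
	o->refcount--;		/* drop the temporary pin */
}

/* Analogue of kvm_vgic_set_owner(): the reference taken at the top is
 * dropped before returning, so no refcount dangles. */
static int set_owner(struct obj *o, void *owner)
{
	int ret = 0;

	obj_get(o);
	if (o->owner && o->owner != owner)
		ret = -1;	/* -EEXIST in the real code */
	else
		o->owner = owner;
	obj_put(o);		/* the matching put added by this patch */

	return ret;
}
```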

unreferenced object 0x800b6365ab80 (size 128):
  comm "qemu-system-aar", pid 14414, jiffies 4300822606 (age 84.436s)
  hex dump (first 32 bytes):
00 00 00 00 00 00 00 00 b0 e1 e0 38 00 00 ff ff  ...8
b0 e1 e0 38 00 00 ff ff 78 e6 ad dd 0a 80 ff ff  ...8x...
  backtrace:
[<a08b80e2>] kmem_cache_alloc+0x178/0x208
[<114591cb>] vgic_add_lpi.part.5+0x34/0x190
[<ec1425ae>] vgic_its_cmd_handle_mapi+0x320/0x348
[<935c5c32>] vgic_its_process_commands.part.14+0x350/0x8b8
[<dc256d2c>] vgic_mmio_write_its_cwriter+0x78/0x98
[<8659acd2>] dispatch_mmio_write+0xd4/0x120

[...]

Cc: Christoffer Dall 
Fixes: c6ccd30e0de3 ("KVM: arm/arm64: Introduce an allocator for in-kernel irq lines")
Signed-off-by: Dave Martin 

---

Based on the limited testing I've done so far, the patch _appears_ to
fix the bug.

However, I still don't understand why the bug is intermittent, or why
the arch_timer or pmu (the only apparent users of kvm_vgic_set_owner())
are claiming an LPI in the first place.

So there may be other bugs in the mix, or I may have misunderstood
something...

The bug (and fix) were observed with native qemu on ThunderX2, on a
merge of v5.1 with kvmarm/next commit 9eecfc22e0bf ("KVM: arm64: Fix
ptrauth ID register masking logic").

My qemu invocation was:

$ qemu-system-aarch64 -machine virt,accel=kvm,gic_version=3 -cpu host \
-smp 4 -nographic \
-drive id=vblock,file=block.qcow2,format=qcow2,if=none \
-device virtio-blk-device,drive=vblock \
-kernel Image -append 'root=/dev/vda1 ro'
---
 virt/kvm/arm/vgic/vgic.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/virt/kvm/arm/vgic/vgic.c b/virt/kvm/arm/vgic/vgic.c
index 191decc..930319c 100644
--- a/virt/kvm/arm/vgic/vgic.c
+++ b/virt/kvm/arm/vgic/vgic.c
@@ -599,6 +599,7 @@ int kvm_vgic_set_owner(struct kvm_vcpu *vcpu, unsigned int intid, void *owner)
else
irq->owner = owner;
raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
+   vgic_put_irq(vcpu->kvm, irq);
 
return ret;
 }
-- 
2.1.4

___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


[PATCH 0/2] KVM: arm/arm64: vgic: A couple of memory leak fixes

2019-06-06 Thread Dave Martin
While using kmemleak to verify that the KVM SVE series wasn't
contributing any new memory leaks, I hit a couple of existing leaks to
do with vGIC irqs and the vGIC ITS that appear to have been there for
a while.

See the individual patches for details.

I'm not familiar with the affected code, so I may have overlooked
something.

Tested with qemu on ThunderX2.

Dave Martin (2):
  KVM: arm/arm64: vgic: Fix kvm_device leak in vgic_its_destroy
  KVM: arm/arm64: vgic: Fix irq refcount leak in kvm_vgic_set_owner()

 virt/kvm/arm/vgic/vgic-its.c | 1 +
 virt/kvm/arm/vgic/vgic.c | 1 +
 2 files changed, 2 insertions(+)

-- 
2.1.4

___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


Re: [PATCH] KVM: arm64: Drop 'const' from argument of vq_present()

2019-06-04 Thread Dave Martin
On Tue, Jun 04, 2019 at 10:13:19AM +0530, Viresh Kumar wrote:
> We currently get following compilation warning:
> 
> arch/arm64/kvm/guest.c: In function 'set_sve_vls':
> arch/arm64/kvm/guest.c:262:18: warning: passing argument 1 of 'vq_present' from incompatible pointer type
> arch/arm64/kvm/guest.c:212:13: note: expected 'const u64 (* const)[8]' but argument is of type 'u64 (*)[8]'
> 
> The argument can't be const, as it is copied at runtime using
> copy_from_user(). Drop const from the prototype of vq_present().
> 
> Fixes: 9033bba4b535 ("KVM: arm64/sve: Add pseudo-register for the guest's vector lengths")
> Signed-off-by: Viresh Kumar 
> ---
>  arch/arm64/kvm/guest.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
> index 3ae2f82fca46..78f5a4f45e0a 100644
> --- a/arch/arm64/kvm/guest.c
> +++ b/arch/arm64/kvm/guest.c
> @@ -209,7 +209,7 @@ static int set_core_reg(struct kvm_vcpu *vcpu, const 
> struct kvm_one_reg *reg)
>  #define vq_mask(vq) ((u64)1 << ((vq) - SVE_VQ_MIN) % 64)
>  
>  static bool vq_present(
> - const u64 (*const vqs)[KVM_ARM64_SVE_VLS_WORDS],
> + u64 (*const vqs)[KVM_ARM64_SVE_VLS_WORDS],
>   unsigned int vq)
>  {
>   return (*vqs)[vq_word(vq)] & vq_mask(vq);

Ack, but maybe this should just be converted to a macro?

It already feels a bit like overkill for this to be a function.
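
For illustration, a macro version might look something like the sketch below. The `vq_word()`/`vq_mask()` helpers follow the ones in arch/arm64/kvm/guest.c, but `SVE_VQ_MIN` and `KVM_ARM64_SVE_VLS_WORDS` are stand-in values here, so treat this as a sketch rather than a drop-in replacement:

```c
#include <assert.h>

typedef unsigned long long u64;

/* Stand-in values for the example; the kernel's definitions live in
 * the arch/arm64 headers. */
#define SVE_VQ_MIN 1
#define KVM_ARM64_SVE_VLS_WORDS 8

/* Same shape as the helpers in arch/arm64/kvm/guest.c */
#define vq_word(vq) (((vq) - SVE_VQ_MIN) / 64)
#define vq_mask(vq) ((u64)1 << ((vq) - SVE_VQ_MIN) % 64)

/* Macro form: no pointer-to-const-array parameter type at all, so the
 * incompatible-pointer warning cannot arise. */
#define vq_present(vqs, vq) \
	(!!((*(vqs))[vq_word(vq)] & vq_mask(vq)))
```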

Cheers
---Dave


Re: [PATCH] KVM: arm64: Drop 'const' from argument of vq_present()

2019-06-04 Thread Dave Martin
On Tue, Jun 04, 2019 at 03:01:53PM +0530, Viresh Kumar wrote:
> On 04-06-19, 10:26, Dave Martin wrote:
> > I'm in two minds about whether this is worth fixing, but if you want to
> > post a patch to remove the extra const (or convert vq_present() to a
> > macro), I'll take a look at it.
> 
> This patch already does what you are asking for (remove the extra
> const), isn't it ?

Yes, sorry -- I didn't scroll back far enough.

> I looked at my textbook (The C programming Language, By Brian W.
> Kernighan and Dennis M. Ritchie.) and it says:
> 
> "
> The const declaration can also be used with array arguments, to
> indicate that the function does not change that array:
> 
> int strlen(const char[]);
> "
> 
> and so this patch isn't necessary for sure.

This is an array to which a pointer argument points, not an array
argument.  I think that's how we hit the constified double-indirection
problem.

Cheers
---Dave


Re: [PATCH] KVM: arm64: Filter out invalid core register IDs in KVM_GET_REG_LIST

2019-06-04 Thread Dave Martin
On Tue, Jun 04, 2019 at 11:23:01AM +0200, Andrew Jones wrote:
> On Mon, Jun 03, 2019 at 05:52:07PM +0100, Dave Martin wrote:
> > Since commit d26c25a9d19b ("arm64: KVM: Tighten guest core register
> > access from userspace"), KVM_{GET,SET}_ONE_REG rejects register IDs
> > that do not correspond to a single underlying architectural register.
> > 
> > KVM_GET_REG_LIST was not changed to match however: instead, it
> > simply yields a list of 32-bit register IDs that together cover the
> > whole kvm_regs struct.  This means that if userspace tries to use
> > the resulting list of IDs directly to drive calls to KVM_*_ONE_REG,
> > some of those calls will now fail.
> > 
> > This was not the intention.  Instead, iterating KVM_*_ONE_REG over
> > the list of IDs returned by KVM_GET_REG_LIST should be guaranteed
> > to work.
> > 
> > This patch fixes the problem by splitting validate_core_offset()
> > into a backend core_reg_size_from_offset() which does all of the
> > work except for checking that the size field in the register ID
> > matches, and kvm_arm_copy_reg_indices() and num_core_regs() are
> > converted to use this to enumerate the valid offsets.
> > 
> > kvm_arm_copy_reg_indices() now also sets the register ID size field
> > appropriately based on the value returned, so the register ID
> > supplied to userspace is fully qualified for use with the register
> > access ioctls.
> 
> Ah yes, I've seen this issue, but hadn't gotten around to fixing it.
> 
> > 
> > Cc: sta...@vger.kernel.org
> > Fixes: d26c25a9d19b ("arm64: KVM: Tighten guest core register access from userspace")
> > Signed-off-by: Dave Martin 
> > 
> > ---
> > 
> > Changes since v3:
> 
> Hmm, I didn't see a v1-v3.

Looks like I didn't mark v3 as such when posting [1], but this has been
knocking around for a while.  It was rather low-priority and I hadn't
got around to testing it previously...


[1] [PATCH] KVM: arm64: Filter out invalid core register IDs in KVM_GET_REG_LIST
https://lists.cs.columbia.edu/pipermail/kvmarm/2019-April/035417.html

> > 
> >  * Rebased onto v5.2-rc1.
> > 
> >  * Tested with qemu by migrating from one qemu instance to another on
> >ThunderX2.
> 
> One of the reasons I was slow to fix this is because QEMU doesn't care
> about the core registers when it uses KVM_GET_REG_LIST. It just completely
> skips all core reg indices, so it never finds out that they're invalid.
> And kvmtool doesn't use KVM_GET_REG_LIST at all. But it's certainly good
> to fix this.

[...]

> Reviewed-by: Andrew Jones 
> 
> I've also tested this using a kvm selftests test I wrote. I haven't posted
> that test yet because it needs some cleanup and I planned on getting back
> to that when getting back to fixing this issue. Anyway, before this patch
> every other 64-bit core reg index is invalid (because its indexing 32-bits
> but claiming a size of 64), all fp regs are invalid, and we were even
> providing a couple indices that mapped to struct padding. After this patch
> everything is right with the world.
> 
> Tested-by: Andrew Jones 

Thanks
---Dave
___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


Re: [PATCH] KVM: arm64: Drop 'const' from argument of vq_present()

2019-06-04 Thread Dave Martin
On Tue, Jun 04, 2019 at 02:25:45PM +0530, Viresh Kumar wrote:
> On 04-06-19, 09:43, Catalin Marinas wrote:
> > On Tue, Jun 04, 2019 at 10:13:19AM +0530, Viresh Kumar wrote:
> > > We currently get following compilation warning:
> > > 
> > > arch/arm64/kvm/guest.c: In function 'set_sve_vls':
> > > arch/arm64/kvm/guest.c:262:18: warning: passing argument 1 of 'vq_present' from incompatible pointer type
> > > arch/arm64/kvm/guest.c:212:13: note: expected 'const u64 (* const)[8]' but argument is of type 'u64 (*)[8]'
> > 
> > Since the vq_present() function does not modify the vqs array, I don't
> > understand why this warning. Compiler bug?
> 
> Probably yes. Also marking array argument to functions as const is a
> right thing to do, to declare that the function wouldn't change the
> array values.
> 
> I tried a recent toolchain and this doesn't happen anymore.
> 
> Sorry for the noise.

Sparse is already warning about this, but I had dismissed it as a false
positive.

I think this is an instance of disallowing implicit conversions of the
form

T ** -> T const **

because this allows a const pointer to be silently de-consted, e.g.:

static const T bar;

void foo(T const **p)
{
*p = &bar;
}

T *baz(void)
{
T *q;
foo(&q);
return q;
}


I _suspect_ that what's going on here is that the compiler is
eliminating a level of indirection during inlining (i.e. converting
pass-by-reference to direct access, which is precisely what I wanted
to happen).  This removes the potentially invalid behaviour as a
side-effect.

This relies on the compiler optimising / analysing the code
aggressively enough though.

So, I don't have a problem with dropping the extra const, e.g.:

static bool vq_present(
u64 (*const vqs)[KVM_ARM64_SVE_VLS_WORDS],
unsigned int vq)

Since this function is static and only used very locally, I don't see a
big risk: the only reason for the extra const was to check that
vq_present() doesn't modify vqs when it shouldn't.  But it's a trivial
function, and the intent is pretty clear without the extra type
modifier.


I'm in two minds about whether this is worth fixing, but if you want to
post a patch to remove the extra const (or convert vq_present() to a
macro), I'll take a look at it.

Cheers
---Dave
___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


[PATCH] KVM: arm64: Filter out invalid core register IDs in KVM_GET_REG_LIST

2019-06-03 Thread Dave Martin
Since commit d26c25a9d19b ("arm64: KVM: Tighten guest core register
access from userspace"), KVM_{GET,SET}_ONE_REG rejects register IDs
that do not correspond to a single underlying architectural register.

KVM_GET_REG_LIST was not changed to match however: instead, it
simply yields a list of 32-bit register IDs that together cover the
whole kvm_regs struct.  This means that if userspace tries to use
the resulting list of IDs directly to drive calls to KVM_*_ONE_REG,
some of those calls will now fail.

This was not the intention.  Instead, iterating KVM_*_ONE_REG over
the list of IDs returned by KVM_GET_REG_LIST should be guaranteed
to work.

This patch fixes the problem by splitting validate_core_offset()
into a backend core_reg_size_from_offset() which does all of the
work except for checking that the size field in the register ID
matches, and kvm_arm_copy_reg_indices() and num_core_regs() are
converted to use this to enumerate the valid offsets.

kvm_arm_copy_reg_indices() now also sets the register ID size field
appropriately based on the value returned, so the register ID
supplied to userspace is fully qualified for use with the register
access ioctls.
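
For reference, the size field lives in bits [55:52] of the register ID and encodes log2 of the register's byte width. The sketch below mirrors the KVM UAPI macros (include/uapi/linux/kvm.h), redefined locally so the example is self-contained:

```c
#include <assert.h>
#include <stdint.h>

/* Size-field constants as in the KVM UAPI (include/uapi/linux/kvm.h),
 * redefined here so the example compiles standalone. */
#define KVM_REG_SIZE_SHIFT	52
#define KVM_REG_SIZE_MASK	0x00f0000000000000ULL
#define KVM_REG_SIZE_U32	0x0020000000000000ULL
#define KVM_REG_SIZE_U64	0x0030000000000000ULL
#define KVM_REG_SIZE_U128	0x0040000000000000ULL

/* KVM_REG_SIZE() decodes the field back to a byte count: 2^n bytes,
 * so a correctly qualified ID tells userspace exactly how big a
 * buffer to pass to KVM_GET_ONE_REG / KVM_SET_ONE_REG. */
#define KVM_REG_SIZE(id) \
	(1ULL << (((id) & KVM_REG_SIZE_MASK) >> KVM_REG_SIZE_SHIFT))
```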

Cc: sta...@vger.kernel.org
Fixes: d26c25a9d19b ("arm64: KVM: Tighten guest core register access from userspace")
Signed-off-by: Dave Martin 

---

Changes since v3:

 * Rebased onto v5.2-rc1.

 * Tested with qemu by migrating from one qemu instance to another on
   ThunderX2.

---
 arch/arm64/kvm/guest.c | 53 +-
 1 file changed, 40 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 3ae2f82..6527c76 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -70,10 +70,8 @@ static u64 core_reg_offset_from_id(u64 id)
return id & ~(KVM_REG_ARCH_MASK | KVM_REG_SIZE_MASK | KVM_REG_ARM_CORE);
 }
 
-static int validate_core_offset(const struct kvm_vcpu *vcpu,
-   const struct kvm_one_reg *reg)
+static int core_reg_size_from_offset(const struct kvm_vcpu *vcpu, u64 off)
 {
-   u64 off = core_reg_offset_from_id(reg->id);
int size;
 
switch (off) {
@@ -103,8 +101,7 @@ static int validate_core_offset(const struct kvm_vcpu *vcpu,
return -EINVAL;
}
 
-   if (KVM_REG_SIZE(reg->id) != size ||
-   !IS_ALIGNED(off, size / sizeof(__u32)))
+   if (!IS_ALIGNED(off, size / sizeof(__u32)))
return -EINVAL;
 
/*
@@ -115,6 +112,21 @@ static int validate_core_offset(const struct kvm_vcpu *vcpu,
if (vcpu_has_sve(vcpu) && core_reg_offset_is_vreg(off))
return -EINVAL;
 
+   return size;
+}
+
+static int validate_core_offset(const struct kvm_vcpu *vcpu,
+   const struct kvm_one_reg *reg)
+{
+   u64 off = core_reg_offset_from_id(reg->id);
+   int size = core_reg_size_from_offset(vcpu, off);
+
+   if (size < 0)
+   return -EINVAL;
+
+   if (KVM_REG_SIZE(reg->id) != size)
+   return -EINVAL;
+
return 0;
 }
 
@@ -453,19 +465,34 @@ static int copy_core_reg_indices(const struct kvm_vcpu *vcpu,
 {
unsigned int i;
int n = 0;
-   const u64 core_reg = KVM_REG_ARM64 | KVM_REG_SIZE_U64 | KVM_REG_ARM_CORE;
 
for (i = 0; i < sizeof(struct kvm_regs) / sizeof(__u32); i++) {
-   /*
-* The KVM_REG_ARM64_SVE regs must be used instead of
-* KVM_REG_ARM_CORE for accessing the FPSIMD V-registers on
-* SVE-enabled vcpus:
-*/
-   if (vcpu_has_sve(vcpu) && core_reg_offset_is_vreg(i))
+   u64 reg = KVM_REG_ARM64 | KVM_REG_ARM_CORE | i;
+   int size = core_reg_size_from_offset(vcpu, i);
+
+   if (size < 0)
+   continue;
+
+   switch (size) {
+   case sizeof(__u32):
+   reg |= KVM_REG_SIZE_U32;
+   break;
+
+   case sizeof(__u64):
+   reg |= KVM_REG_SIZE_U64;
+   break;
+
+   case sizeof(__uint128_t):
+   reg |= KVM_REG_SIZE_U128;
+   break;
+
+   default:
+   WARN_ON(1);
continue;
+   }
 
if (uindices) {
-   if (put_user(core_reg | i, uindices))
+   if (put_user(reg, uindices))
return -EFAULT;
uindices++;
}
-- 
2.1.4

___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


Re: [PATCH kvmtool v3 5/9] KVM: arm/arm64: Add a vcpu feature for pointer authentication

2019-06-03 Thread Dave Martin
On Mon, Jun 03, 2019 at 03:03:48PM +0100, Andre Przywara wrote:
> On Mon, 3 Jun 2019 12:23:03 +0100
> Dave Martin  wrote:
> 
> Hi Dave,
> 
> > On Fri, May 31, 2019 at 06:04:16PM +0100, Andre Przywara wrote:
> > > On Thu, 30 May 2019 16:13:10 +0100
> > > Dave Martin  wrote:
> > >   
> > > > From: Amit Daniel Kachhap 
> > > > 
> > > > This patch adds a runtime capability for KVM tool to enable Arm64 8.3
> > > > Pointer Authentication in guest kernel. Two vcpu features
> > > > KVM_ARM_VCPU_PTRAUTH_[ADDRESS/GENERIC] are supplied together to enable
> > > > Pointer Authentication in KVM guest after checking the capability.
> > > > 
> > > > Command line options --enable-ptrauth and --disable-ptrauth are added
> > > > to use this feature. However, if those options are not provided then
> > > > also this feature is enabled if host supports this capability.  
> > > 
> > > I don't really get the purpose of two options, I think that's quite
> > > confusing. Should the first one either be dropped at all or called
> > > something with "force"?
> > > 
> > > I guess the idea is to fail if pointer auth isn't available, but the
> > > option is supplied?
> > > 
> > > Or maybe have one option with parameters?
> > > --ptrauth[,=enable,=disable]  
> > 
> > So, I was following two principles here, either or both of which may be
> > bogus:
> > 
> > 1) There should be a way to determine whether KVM turns a given feature
> > on or off (instead of magically defaulting to something).
> > 
> > 2) To a first approximation, kvmtool should allow each major KVM ABI
> > feature to be exercised.
> > 
> > 3) By default, kvmtool should offer the maximum feature set possible to
> > the guest.
> > 
> > 
> > (3) is well established, but (1) and (2) may be open to question?
> > 
> > If we hold to both principles, it makes sense to have options
> > functionally equivalent to what I suggested (where KVM provides the
> > control in the first place), but there may be more convenient ways
> > to respell the options.
> > 
> > If we really can't decide, maybe it's better to drop the options
> > altogether until we have a real use case.
> 
> In general I prefer the lack of a *need* for options over tuneability, but my 
> concern is not so much exposing this knob, but more how it's done ...
> 
> > I've found the options very useful for testing and debugging on the SVE
> > side, but I can't comment on ptrauth.  Maybe someone else has a view?
> 
> Given that kvmtool was designed as a hacker tool, I find it quite useful to 
> play around with those settings. I just have my gripes with those 
> enable/disable pair, which are two related, but actually separate options, 
> both polluting the command line options space and also being confusing to the 
> user.
> I would be much happier if we would have one option per feature and a 
> parameter: "--ptrauth={enable,disable}". Omitting the option altogether 
> defaults to "enabled-if-available". Specifying it will force it on or off, 
> accompanied by an error message if either(?) if not possible. This would also 
> remove the need for the somewhat awkward "don't enable both" check.
> It would also more easily allow a common parser, to be used by both ptrauth 
> and SVE, for instance.
> We could even introduce an explicit "default" parameter value, just in case 
> people want to spell this case out.
> 
> What do you think about this?

Happy to do something like that, though it looks like the decision to
drop the options altogether may preempt that...
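
For what it's worth, the single tri-state option Andre describes could be parsed along these lines. This is a sketch with invented names, not actual kvmtool code:

```c
#include <assert.h>
#include <string.h>

/* Illustrative tri-state per-feature option, as in Andre's proposal:
 * "--ptrauth={enable,disable,default}", defaulting when omitted. */
enum feature_mode {
	FEATURE_DEFAULT,	/* option omitted: enable if available */
	FEATURE_ENABLE,		/* force on, error out if unsupported */
	FEATURE_DISABLE,	/* force off */
};

static int parse_feature_mode(const char *arg, enum feature_mode *mode)
{
	if (!arg || !strcmp(arg, "default"))
		*mode = FEATURE_DEFAULT;
	else if (!strcmp(arg, "enable"))
		*mode = FEATURE_ENABLE;
	else if (!strcmp(arg, "disable"))
		*mode = FEATURE_DISABLE;
	else
		return -1;	/* unrecognised value */
	return 0;
}

/* The decision then collapses to one check, with no
 * "don't enable both" special case: */
static int feature_wanted(enum feature_mode mode, int host_supports)
{
	if (mode == FEATURE_DISABLE)
		return 0;
	if (mode == FEATURE_ENABLE && !host_supports)
		return -1;	/* caller reports a clean error */
	return host_supports;
}
```

The same parser could then serve both the ptrauth and SVE options.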

Cheers
---Dave
___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


Re: [PATCH kvmtool v3 5/9] KVM: arm/arm64: Add a vcpu feature for pointer authentication

2019-06-03 Thread Dave Martin
On Mon, Jun 03, 2019 at 03:07:06PM +0100, Will Deacon wrote:
> On Mon, Jun 03, 2019 at 12:23:03PM +0100, Dave Martin wrote:
> > On Fri, May 31, 2019 at 06:04:16PM +0100, Andre Przywara wrote:
> > > On Thu, 30 May 2019 16:13:10 +0100
> > > Dave Martin  wrote:
> > > 
> > > > From: Amit Daniel Kachhap 
> > > > 
> > > > This patch adds a runtime capability for KVM tool to enable Arm64 8.3
> > > > Pointer Authentication in guest kernel. Two vcpu features
> > > > KVM_ARM_VCPU_PTRAUTH_[ADDRESS/GENERIC] are supplied together to enable
> > > > Pointer Authentication in KVM guest after checking the capability.
> > > > 
> > > > Command line options --enable-ptrauth and --disable-ptrauth are added
> > > > to use this feature. However, if those options are not provided then
> > > > also this feature is enabled if host supports this capability.
> > > 
> > > I don't really get the purpose of two options, I think that's quite
> > > confusing. Should the first one either be dropped at all or called
> > > something with "force"?
> > > 
> > > I guess the idea is to fail if pointer auth isn't available, but the
> > > option is supplied?
> > > 
> > > Or maybe have one option with parameters?
> > > --ptrauth[,=enable,=disable]
> > 
> > So, I was following two principles here, either or both of which may be
> > bogus:
> > 
> > 1) There should be a way to determine whether KVM turns a given feature
> > on or off (instead of magically defaulting to something).
> > 
> > 2) To a first approximation, kvmtool should allow each major KVM ABI
> > feature to be exercised.
> > 
> > 3) By default, kvmtool should offer the maximum feature set possible to
> > the guest.
> > 
> > 
> > (3) is well established, but (1) and (2) may be open to question?
> > 
> > If we hold to both principles, it makes sense to have options
> > functionally equivalent to what I suggested (where KVM provides the
> > control in the first place), but there may be more convenient ways
> > to respell the options.
> > 
> > If we really can't decide, maybe it's better to drop the options
> > altogether until we have a real use case.
> > 
> > I've found the options very useful for testing and debugging on the SVE
> > side, but I can't comment on ptrauth.  Maybe someone else has a view?
> 
> I'd prefer to drop them, to be honest. Whilst they may have been useful
> during SVE development, it's not clear to me that they will continue to
> be as useful now that things should be settling down. It's probably useful
> to print out any features that we've explicitly enabled (or failed to
> enable), but I'd stop there for the time being.

I don't have a strong view on this.

I'm happy to respin dropping the command line options and defaulting
everything to on: for hacking purposes, it's easy to keep a local branch.

Cheers
---Dave
___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


Re: [PATCH kvmtool v3 5/9] KVM: arm/arm64: Add a vcpu feature for pointer authentication

2019-06-03 Thread Dave Martin
@Peter, do you have an opinion on this (below) ?

On Thu, May 30, 2019 at 04:13:10PM +0100, Dave Martin wrote:
> From: Amit Daniel Kachhap 
> 
> This patch adds a runtime capability for KVM tool to enable Arm64 8.3
> Pointer Authentication in guest kernel. Two vcpu features
> KVM_ARM_VCPU_PTRAUTH_[ADDRESS/GENERIC] are supplied together to enable
> Pointer Authentication in KVM guest after checking the capability.
> 
> Command line options --enable-ptrauth and --disable-ptrauth are added
> to use this feature. However, if those options are not provided then
> also this feature is enabled if host supports this capability.
> 
> The macros defined in the headers are not in sync and should be replaced
> from the upstream.
> 
> Signed-off-by: Amit Daniel Kachhap 
> Signed-off-by: Dave Martin  [merge new kernel headers]
> ---
>  arm/aarch32/include/kvm/kvm-cpu-arch.h|  2 ++
>  arm/aarch64/include/kvm/kvm-config-arch.h |  6 +-
>  arm/aarch64/include/kvm/kvm-cpu-arch.h|  3 +++
>  arm/include/arm-common/kvm-config-arch.h  |  2 ++
>  arm/kvm-cpu.c | 20 ++--
>  5 files changed, 30 insertions(+), 3 deletions(-)
> 
> diff --git a/arm/aarch32/include/kvm/kvm-cpu-arch.h 
> b/arm/aarch32/include/kvm/kvm-cpu-arch.h
> index d28ea67..3ec6f03 100644
> --- a/arm/aarch32/include/kvm/kvm-cpu-arch.h
> +++ b/arm/aarch32/include/kvm/kvm-cpu-arch.h
> @@ -13,4 +13,6 @@
>  #define ARM_CPU_ID   0, 0, 0
>  #define ARM_CPU_ID_MPIDR 5
>  
> +#define ARM_VCPU_PTRAUTH_FEATURE 0
> +
>  #endif /* KVM__KVM_CPU_ARCH_H */
> diff --git a/arm/aarch64/include/kvm/kvm-config-arch.h 
> b/arm/aarch64/include/kvm/kvm-config-arch.h
> index 04be43d..0279b13 100644
> --- a/arm/aarch64/include/kvm/kvm-config-arch.h
> +++ b/arm/aarch64/include/kvm/kvm-config-arch.h
> @@ -8,7 +8,11 @@
>   "Create PMUv3 device"), \
>   OPT_U64('\0', "kaslr-seed", &(cfg)->kaslr_seed, \
>   "Specify random seed for Kernel Address Space " \
> - "Layout Randomization (KASLR)"),
> + "Layout Randomization (KASLR)"),\
> + OPT_BOOLEAN('\0', "enable-ptrauth", &(cfg)->enable_ptrauth, \
> + "Enables pointer authentication"),  \
> + OPT_BOOLEAN('\0', "disable-ptrauth", &(cfg)->disable_ptrauth,   \
> + "Disables pointer authentication"),
>  
>  #include "arm-common/kvm-config-arch.h"
>  
> diff --git a/arm/aarch64/include/kvm/kvm-cpu-arch.h 
> b/arm/aarch64/include/kvm/kvm-cpu-arch.h
> index a9d8563..9fa99fb 100644
> --- a/arm/aarch64/include/kvm/kvm-cpu-arch.h
> +++ b/arm/aarch64/include/kvm/kvm-cpu-arch.h
> @@ -17,4 +17,7 @@
>  #define ARM_CPU_CTRL 3, 0, 1, 0
>  #define ARM_CPU_CTRL_SCTLR_EL1   0
>  
> +#define ARM_VCPU_PTRAUTH_FEATURE ((1UL << KVM_ARM_VCPU_PTRAUTH_ADDRESS) \
> + | (1UL << KVM_ARM_VCPU_PTRAUTH_GENERIC))
> +
>  #endif /* KVM__KVM_CPU_ARCH_H */
> diff --git a/arm/include/arm-common/kvm-config-arch.h 
> b/arm/include/arm-common/kvm-config-arch.h
> index 5734c46..1b4287d 100644
> --- a/arm/include/arm-common/kvm-config-arch.h
> +++ b/arm/include/arm-common/kvm-config-arch.h
> @@ -10,6 +10,8 @@ struct kvm_config_arch {
>   boolaarch32_guest;
>   boolhas_pmuv3;
>   u64 kaslr_seed;
> + boolenable_ptrauth;
> + booldisable_ptrauth;
>   enum irqchip_type irqchip;
>   u64 fw_addr;
>  };
> diff --git a/arm/kvm-cpu.c b/arm/kvm-cpu.c
> index 7780251..acd1d5f 100644
> --- a/arm/kvm-cpu.c
> +++ b/arm/kvm-cpu.c
> @@ -68,6 +68,18 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, 
> unsigned long cpu_id)
>   vcpu_init.features[0] |= (1UL << KVM_ARM_VCPU_PSCI_0_2);
>   }
>  
> + /* Check Pointer Authentication command line arguments. */
> + if (kvm->cfg.arch.enable_ptrauth && kvm->cfg.arch.disable_ptrauth)
> + die("Both enable-ptrauth and disable-ptrauth option cannot be present");
> + /*
> +  * Always enable Pointer Authentication if system supports
> +  * this extension unless disable-ptrauth option is present.
> +  */
> + if (kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_ADDRESS) &&
> + kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_GENERIC) &&
> + !kvm->cfg.arch.disable_ptrauth)
> + 

Re: [PATCH kvmtool v3 5/9] KVM: arm/arm64: Add a vcpu feature for pointer authentication

2019-06-03 Thread Dave Martin
On Fri, May 31, 2019 at 06:04:16PM +0100, Andre Przywara wrote:
> On Thu, 30 May 2019 16:13:10 +0100
> Dave Martin  wrote:
> 
> > From: Amit Daniel Kachhap 
> > 
> > This patch adds a runtime capability for KVM tool to enable Arm64 8.3
> > Pointer Authentication in guest kernel. Two vcpu features
> > KVM_ARM_VCPU_PTRAUTH_[ADDRESS/GENERIC] are supplied together to enable
> > Pointer Authentication in KVM guest after checking the capability.
> > 
> > Command line options --enable-ptrauth and --disable-ptrauth are added
> > to use this feature. However, if those options are not provided then
> > also this feature is enabled if host supports this capability.
> 
> I don't really get the purpose of two options, I think that's quite
> confusing. Should the first one either be dropped at all or called
> something with "force"?
> 
> I guess the idea is to fail if pointer auth isn't available, but the
> option is supplied?
> 
> Or maybe have one option with parameters?
> --ptrauth[,=enable,=disable]

So, I was following two principles here, either or both of which may be
bogus:

1) There should be a way to determine whether KVM turns a given feature
on or off (instead of magically defaulting to something).

2) To a first approximation, kvmtool should allow each major KVM ABI
feature to be exercised.

3) By default, kvmtool should offer the maximum feature set possible to
the guest.


(3) is well established, but (1) and (2) may be open to question?

If we hold to both principles, it makes sense to have options
functionally equivalent to what I suggested (where KVM provides the
control in the first place), but there may be more convenient ways
to respell the options.

If we really can't decide, maybe it's better to drop the options
altogether until we have a real use case.

I've found the options very useful for testing and debugging on the SVE
side, but I can't comment on ptrauth.  Maybe someone else has a view?

> > The macros defined in the headers are not in sync and should be replaced
> > from the upstream.
> 
> This is no longer true, I guess?

Ah yes, that comment can go.

Cheers
---Dave
___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


Re: [PATCH kvmtool v3 8/9] arm64: Add SVE support

2019-06-03 Thread Dave Martin
On Fri, May 31, 2019 at 06:13:31PM +0100, Andre Przywara wrote:
> On Thu, 30 May 2019 16:13:13 +0100
> Dave Martin  wrote:
> 
> > This patch adds --enable-sve/--disable-sve command line options to
> > allow the user to control whether the Scalable Vector Extension is
> > made available to the guest.
> 
> I guess I have a similar concern about this enable/disable pair being
> confusing, though there is more sense here for SVE, given the impact of it
> being enabled in the guest.
> 
> Maybe we can cover both pointer auth and SVE options with the same revised
> approach?

I agree that we should follow the same approach for both when we've
decided what approach to take.

(That was part of the reason for pulling both into the same series -- I
didn't want to end up randomly doing two different things without a
conscious intention to do so.)

Cheers
---Dave
___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


Re: [PATCH kvmtool v3 7/9] arm64: Make ptrauth enable/disable diagnostics more user-friendly

2019-06-03 Thread Dave Martin
On Fri, May 31, 2019 at 06:05:01PM +0100, Andre Przywara wrote:
> On Thu, 30 May 2019 16:13:12 +0100
> Dave Martin  wrote:
> 
> > To help the user understand what is going on, amend ptrauth
> > configuration diagnostic messages to refer to command line options
> > by the exact name used on the command line.
> > 
> > Also, provide a clean diagnostic when ptrauth is requested, but not
available.  The generic "Unable to initialise vcpu" message is
> > rather cryptic for this case.
> 
> Again I don't see much value in having this as a separate patch, as it
> basically just touches code introduced two patches earlier. I think it
> should be merged into 5/9.

Same as with the previous patch, I thought it was better to keep it
separate for review purposes for now, since it makes changes on top of
Amit's existing patch.

> > Since we now don't attempt to enable ptrauth at all unless KVM
> > reports the relevant capabilities, remove the error message for
> > that case too: in any case, we can't diagnose precisely why
> > KVM_ARM_VCPU_INIT failed, so the message may be misleading.
> 
> So this leaves the only point where we use .enable_ptrauth to that error
> message about the host not supporting it. Not sure if that's worth this
> separate option?

There is indeed a question to be resolved here.  See my response to the
penultimate patch.

Cheers
---Dave
___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


Re: [PATCH kvmtool v3 4/9] update_headers: Sync kvm UAPI headers with linux v5.1-rc1

2019-06-03 Thread Dave Martin
On Fri, May 31, 2019 at 06:03:19PM +0100, Andre Przywara wrote:
> On Thu, 30 May 2019 16:13:09 +0100
> Dave Martin  wrote:
> 
> > Subject: [PATCH kvmtool v3 4/9] update_headers: Sync kvm UAPI headers with linux v5.1-rc1
> 
> This is actually v5.2-rc1, isn't it?

Doh.  Yes.  Amended.

> Apart from that:
> 
> > Pull in upstream UAPI headers, for subsequent arm64 SVE / ptrauth
> > support (among other things).
> > 
> > Signed-off-by: Dave Martin 
> 
> Reviewed-by: Andre Przywara 

Thanks
---Dave
___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


Re: [PATCH kvmtool v3 6/9] arm/arm64: Factor out ptrauth vcpu feature setup

2019-06-03 Thread Dave Martin
On Fri, May 31, 2019 at 06:04:36PM +0100, Andre Przywara wrote:
> On Thu, 30 May 2019 16:13:11 +0100
> Dave Martin  wrote:
> 
> > In the interest of readability, factor out the vcpu feature setup
> > for ptrauth into a separate function.
> > 
> > Also, because aarch32 doesn't have this feature or the related
> > command line options anyway, move the actual code into aarch64/.
> > 
> > Since ARM_VCPU_PTRAUTH_FEATURE is only there to make the ptrauth
> > feature setup code compile on arm, it is no longer needed: inline
> > and remove it.
> 
> I am not sure this is useful as a separate patch, so can we just merge
> this into 5/9?

Could be.  I wanted to keep the changes against Amit's original patch
clear for now, so it's easier for him to review.

Cheers
---Dave


Re: [PATCH kvmtool v3 3/9] update_headers.sh: arm64: Copy sve_context.h if available

2019-06-03 Thread Dave Martin
On Fri, May 31, 2019 at 06:03:40PM +0100, Andre Przywara wrote:
> On Thu, 30 May 2019 16:13:08 +0100
> Dave Martin  wrote:
> 
> > The SVE KVM support for arm64 includes the additional backend
> > header <asm/sve_context.h> from <asm/kvm.h>.
> > 
> > So update this header if it is available.
> > 
> > To avoid creating a sudden dependency on a specific minimum kernel
> > version, ignore the header if the source kernel tree doesn't have
> > it.
> > 
> > Signed-off-by: Dave Martin 
> > ---
> >  util/update_headers.sh | 13 -
> >  1 file changed, 12 insertions(+), 1 deletion(-)
> > 
> > diff --git a/util/update_headers.sh b/util/update_headers.sh
> > index a7e21b8..90d3ead 100755
> > --- a/util/update_headers.sh
> > +++ b/util/update_headers.sh
> > @@ -25,11 +25,22 @@ fi
> >  
> >  cp -- "$LINUX_ROOT/include/uapi/linux/kvm.h" include/linux
> >  
> > +unset KVMTOOL_PATH
> > +
> > +copy_arm64 () {
> > +   local src=$LINUX_ROOT/arch/$arch/include/uapi/asm/sve_context.h
> 
> To go with your previous patches, aren't you missing the quotes here?

Hmmm, good question.  This is "obviously" a fancy variable assignment,
and so there would be no word splitting after expansion.  So quotes
wouldn't be needed here, just as with a simple assignment.  bash and
ash seem to work this way.

dash doesn't though, and a pedantic reading of the bash man page
suggests that the dash behaviour may be more correct: i.e., local
is just a command, whose arguments are expanded in the usual way,
even if it happens to assign variables as part of its behaviour.

So, while I'm not sure whether or not quotes are officially needed here,
I guess we should have them to be on the safe side.
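As a rough illustration (the helper and path names here are invented
for the demo), quoting the right-hand side keeps the assignment intact
even in shells that word-split the arguments of local:

```shell
#!/bin/sh
# Hypothetical helper: the quoted assignment survives spaces in $1 in
# bash, ash and dash alike; an unquoted `local src=$1` may word-split
# in dash and assign only the first word.
set_src() {
	local src="$1"
	printf 'src=<%s>\n' "$src"
}

set_src "/lib/modules/My Linux Kernel/source"
```
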

> > +
> > +   if [ -e "$src" ]
> > +   then
> > +   cp -- "$src" "$KVMTOOL_PATH/include/asm"
> > +   fi
> > +}
> > +
> 
> Maybe we can make this slightly more generic?
> copy_optional_arch() {
>   local src="$LINUX_ROOT/arch/$arch/include/uapi/$1"
>   [ -r "$src" ] && cp -- "$src" "$KVMTOOL_PATH/include/asm"
> }
> ...
>   arm64) KVMTOOL_PATH=arm/aarch64
>  copy_optional_arch asm/sve_context.h
>  ;;

Happy to change it along those lines.  It's certainly possible this will
be needed again later for some future arch header.

Also, foo && bar exits the shell if foo yields false and set -e is in
effect, so I've reverted to using an if.

(I'm still a little confused though, since I struggled to reproduce this
behaviour outside the script.)

Cheers
---Dave


Re: [PATCH kvmtool v3 2/9] update_headers.sh: Cleanly report failure on error

2019-06-03 Thread Dave Martin
On Fri, May 31, 2019 at 06:03:10PM +0100, Andre Przywara wrote:
> On Thu, 30 May 2019 16:13:07 +0100
> Dave Martin  wrote:
> 
> > If an intermediate step fails, update_headers.sh blindly continues
> > and may return success status.
> > 
> > To avoid errors going unnoticed when driving this script, exit and
> > report failure status as soon as something goes wrong.  For good
> > measure, also fail on expansion of undefined shell variables to aid
> > future maintainers.
> > 
> > Signed-off-by: Dave Martin 
> 
> Both "u" and "e" seem to be standard and work in dash and ash, so:
> 
> Reviewed-by: Andre Przywara 

Thanks.

Those options have been there forever, so I presume they are specified
by POSIX... but I confess I didn't check.
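For reference, both -e and -u are indeed specified by the POSIX set
builtin. A quick sketch of what -u buys (the variable name is invented
for the demo):

```shell
#!/bin/sh
# Under set -u, expanding an unset variable is a hard error rather than
# a silent empty string, so typos in variable names fail loudly.
if ( set -u; : "${DEMO_UNSET_VARIABLE}" ) 2>/dev/null; then
	echo "expanded silently"
else
	echo "caught unset variable"
fi
```
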

Cheers
---Dave


Re: [PATCH kvmtool v3 1/9] update_headers.sh: Add missing shell quoting

2019-06-03 Thread Dave Martin
On Fri, May 31, 2019 at 06:02:53PM +0100, Andre Przywara wrote:
> On Thu, 30 May 2019 16:13:06 +0100
> Dave Martin  wrote:
> 
> > update_headers.sh can break if the current working directory has a
> > funny name or if something odd is passed for LINUX_ROOT.
> 
> Do you actually have spaces in your Linux path? ;-)

No.  I'm assuming that people using a fancy desktop need to call it
"My Linux Kernel" in order to comprehend what it is though.

(Only joking!)

> > In the interest of cleanliness, quote where appropriate.
> > 
> > Signed-off-by: Dave Martin 
> 
> Looks alright to me:
> 
> Reviewed-by: Andre Przywara 

[...]

Thanks
---Dave


[PATCH kvmtool v3 8/9] arm64: Add SVE support

2019-05-30 Thread Dave Martin
This patch adds --enable-sve/--disable-sve command line options to
allow the user to control whether the Scalable Vector Extension is
made available to the guest.

This requires use of the new KVM_ARM_VCPU_FINALIZE ioctl before the
vcpu is runnable, so a new hook kvm_cpu__configure_features() is
added to provide an appropriate place to do this work.

By default, SVE is enabled for the guest if the host supports it.

Signed-off-by: Dave Martin 
---
 arm/aarch32/include/kvm/kvm-cpu-arch.h|  4 +++
 arm/aarch64/include/kvm/kvm-config-arch.h |  6 -
 arm/aarch64/include/kvm/kvm-cpu-arch.h|  1 +
 arm/aarch64/kvm-cpu.c | 41 +++
 arm/include/arm-common/kvm-config-arch.h  |  2 ++
 arm/kvm-cpu.c |  3 +++
 6 files changed, 56 insertions(+), 1 deletion(-)

diff --git a/arm/aarch32/include/kvm/kvm-cpu-arch.h 
b/arm/aarch32/include/kvm/kvm-cpu-arch.h
index 01983f0..780e0e2 100644
--- a/arm/aarch32/include/kvm/kvm-cpu-arch.h
+++ b/arm/aarch32/include/kvm/kvm-cpu-arch.h
@@ -15,5 +15,9 @@
 
 static inline void kvm_cpu__select_features(struct kvm *kvm,
struct kvm_vcpu_init *init) { }
+static inline int kvm_cpu__configure_features(struct kvm_cpu *vcpu)
+{
+   return 0;
+}
 
 #endif /* KVM__KVM_CPU_ARCH_H */
diff --git a/arm/aarch64/include/kvm/kvm-config-arch.h 
b/arm/aarch64/include/kvm/kvm-config-arch.h
index fe1699d..41e9d05 100644
--- a/arm/aarch64/include/kvm/kvm-config-arch.h
+++ b/arm/aarch64/include/kvm/kvm-config-arch.h
@@ -12,7 +12,11 @@
OPT_BOOLEAN('\0', "enable-ptrauth", &(cfg)->enable_ptrauth, \
"Enable pointer authentication for the guest"), \
OPT_BOOLEAN('\0', "disable-ptrauth", &(cfg)->disable_ptrauth,   \
-   "Disable pointer authentication for the guest"),
+   "Disable pointer authentication for the guest"), \
+   OPT_BOOLEAN('\0', "enable-sve", &(cfg)->enable_sve, \
+   "Enable SVE for the guest"),\
+   OPT_BOOLEAN('\0', "disable-sve", &(cfg)->disable_sve,   \
+   "Disable SVE for the guest"),
 
 #include "arm-common/kvm-config-arch.h"
 
diff --git a/arm/aarch64/include/kvm/kvm-cpu-arch.h 
b/arm/aarch64/include/kvm/kvm-cpu-arch.h
index e6875fc..8dfb82e 100644
--- a/arm/aarch64/include/kvm/kvm-cpu-arch.h
+++ b/arm/aarch64/include/kvm/kvm-cpu-arch.h
@@ -18,5 +18,6 @@
 #define ARM_CPU_CTRL_SCTLR_EL1 0
 
 void kvm_cpu__select_features(struct kvm *kvm, struct kvm_vcpu_init *init);
+int kvm_cpu__configure_features(struct kvm_cpu *vcpu);
 
 #endif /* KVM__KVM_CPU_ARCH_H */
diff --git a/arm/aarch64/kvm-cpu.c b/arm/aarch64/kvm-cpu.c
index 08e4fd5..cdfb22e 100644
--- a/arm/aarch64/kvm-cpu.c
+++ b/arm/aarch64/kvm-cpu.c
@@ -152,9 +152,50 @@ static void select_ptrauth_feature(struct kvm *kvm, struct 
kvm_vcpu_init *init)
}
 }
 
+static void select_sve_feature(struct kvm *kvm, struct kvm_vcpu_init *init)
+{
+   bool supported;
+
+   if (kvm->cfg.arch.enable_sve && kvm->cfg.arch.disable_sve)
+   die("--enable-sve conflicts with --disable-sve");
+
+   supported = kvm__supports_extension(kvm, KVM_CAP_ARM_SVE);
+
+   if (kvm->cfg.arch.enable_sve && !supported)
+   die("--enable-sve not supported on this host");
+
+   /* Default SVE to on if available and not explicitly disabled */
+   if (supported && !kvm->cfg.arch.disable_sve) {
+   kvm->cfg.arch.enable_sve = true;
+   init->features[0] |= 1UL << KVM_ARM_VCPU_SVE;
+   }
+}
+
 void kvm_cpu__select_features(struct kvm *kvm, struct kvm_vcpu_init *init)
 {
select_ptrauth_feature(kvm, init);
+   select_sve_feature(kvm, init);
+}
+
+static int configure_sve(struct kvm_cpu *vcpu)
+{
+   int feature = KVM_ARM_VCPU_SVE;
+
+   if (ioctl(vcpu->vcpu_fd, KVM_ARM_VCPU_FINALIZE, &feature)) {
+   pr_err("KVM_ARM_VCPU_FINALIZE: %s", strerror(errno));
+   return -1;
+   }
+
+   return 0;
+}
+
+int kvm_cpu__configure_features(struct kvm_cpu *vcpu)
+{
+   if (vcpu->kvm->cfg.arch.enable_sve)
+   if (configure_sve(vcpu))
+   return -1;
+
+   return 0;
 }
 
 void kvm_cpu__reset_vcpu(struct kvm_cpu *vcpu)
diff --git a/arm/include/arm-common/kvm-config-arch.h 
b/arm/include/arm-common/kvm-config-arch.h
index 1b4287d..40e3d1f 100644
--- a/arm/include/arm-common/kvm-config-arch.h
+++ b/arm/include/arm-common/kvm-config-arch.h
@@ -10,6 +10,8 @@ struct kvm_config_arch {
boolaarch32_guest;
boolhas_pmuv3;
u64 kaslr_seed;
+   boolenable_sve;
+  

[PATCH kvmtool v3 6/9] arm/arm64: Factor out ptrauth vcpu feature setup

2019-05-30 Thread Dave Martin
In the interest of readability, factor out the vcpu feature setup
for ptrauth into a separate function.

Also, because aarch32 doesn't have this feature or the related
command line options anyway, move the actual code into aarch64/.

Since ARM_VCPU_PTRAUTH_FEATURE is only there to make the ptrauth
feature setup code compile on arm, it is no longer needed: inline
and remove it.

Signed-off-by: Dave Martin 
---
 arm/aarch32/include/kvm/kvm-cpu-arch.h |  3 ++-
 arm/aarch64/include/kvm/kvm-cpu-arch.h |  3 +--
 arm/aarch64/kvm-cpu.c  | 22 ++
 arm/kvm-cpu.c  | 12 +---
 4 files changed, 26 insertions(+), 14 deletions(-)

diff --git a/arm/aarch32/include/kvm/kvm-cpu-arch.h 
b/arm/aarch32/include/kvm/kvm-cpu-arch.h
index 3ec6f03..01983f0 100644
--- a/arm/aarch32/include/kvm/kvm-cpu-arch.h
+++ b/arm/aarch32/include/kvm/kvm-cpu-arch.h
@@ -13,6 +13,7 @@
 #define ARM_CPU_ID 0, 0, 0
 #define ARM_CPU_ID_MPIDR   5
 
-#define ARM_VCPU_PTRAUTH_FEATURE   0
+static inline void kvm_cpu__select_features(struct kvm *kvm,
+   struct kvm_vcpu_init *init) { }
 
 #endif /* KVM__KVM_CPU_ARCH_H */
diff --git a/arm/aarch64/include/kvm/kvm-cpu-arch.h 
b/arm/aarch64/include/kvm/kvm-cpu-arch.h
index 9fa99fb..e6875fc 100644
--- a/arm/aarch64/include/kvm/kvm-cpu-arch.h
+++ b/arm/aarch64/include/kvm/kvm-cpu-arch.h
@@ -17,7 +17,6 @@
 #define ARM_CPU_CTRL   3, 0, 1, 0
 #define ARM_CPU_CTRL_SCTLR_EL1 0
 
-#define ARM_VCPU_PTRAUTH_FEATURE   ((1UL << KVM_ARM_VCPU_PTRAUTH_ADDRESS) \
-   | (1UL << KVM_ARM_VCPU_PTRAUTH_GENERIC))
+void kvm_cpu__select_features(struct kvm *kvm, struct kvm_vcpu_init *init);
 
 #endif /* KVM__KVM_CPU_ARCH_H */
diff --git a/arm/aarch64/kvm-cpu.c b/arm/aarch64/kvm-cpu.c
index 0aaefaf..d3c32e0 100644
--- a/arm/aarch64/kvm-cpu.c
+++ b/arm/aarch64/kvm-cpu.c
@@ -128,6 +128,28 @@ static void reset_vcpu_aarch64(struct kvm_cpu *vcpu)
}
 }
 
+static void select_ptrauth_feature(struct kvm *kvm, struct kvm_vcpu_init *init)
+{
+   /* Check Pointer Authentication command line arguments. */
+   if (kvm->cfg.arch.enable_ptrauth && kvm->cfg.arch.disable_ptrauth)
+   die("Both enable-ptrauth and disable-ptrauth option cannot be 
present");
+   /*
+* Always enable Pointer Authentication if system supports
+* this extension unless disable-ptrauth option is present.
+*/
+   if (kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_ADDRESS) &&
+   kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_GENERIC) &&
+   !kvm->cfg.arch.disable_ptrauth) {
+   init->features[0] |= 1UL << KVM_ARM_VCPU_PTRAUTH_ADDRESS;
+   init->features[0] |= 1UL << KVM_ARM_VCPU_PTRAUTH_GENERIC;
+   }
+}
+
+void kvm_cpu__select_features(struct kvm *kvm, struct kvm_vcpu_init *init)
+{
+   select_ptrauth_feature(kvm, init);
+}
+
 void kvm_cpu__reset_vcpu(struct kvm_cpu *vcpu)
 {
if (vcpu->kvm->cfg.arch.aarch32_guest)
diff --git a/arm/kvm-cpu.c b/arm/kvm-cpu.c
index acd1d5f..764fb05 100644
--- a/arm/kvm-cpu.c
+++ b/arm/kvm-cpu.c
@@ -68,17 +68,7 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, unsigned 
long cpu_id)
vcpu_init.features[0] |= (1UL << KVM_ARM_VCPU_PSCI_0_2);
}
 
-   /* Check Pointer Authentication command line arguments. */
-   if (kvm->cfg.arch.enable_ptrauth && kvm->cfg.arch.disable_ptrauth)
-   die("Both enable-ptrauth and disable-ptrauth option cannot be 
present");
-   /*
-* Always enable Pointer Authentication if system supports
-* this extension unless disable-ptrauth option is present.
-*/
-   if (kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_ADDRESS) &&
-   kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_GENERIC) &&
-   !kvm->cfg.arch.disable_ptrauth)
-   vcpu_init.features[0] |= ARM_VCPU_PTRAUTH_FEATURE;
+   kvm_cpu__select_features(kvm, &vcpu_init);
 
/*
 * If the preferred target ioctl is successful then
-- 
2.1.4



[PATCH kvmtool v3 9/9] arm64: Select SVE vector lengths via the command line

2019-05-30 Thread Dave Martin
In order to support use cases such as migration, it may be
important in some situations to restrict the set of SVE vector
lengths available to the guest.  It can also be useful to observe
the behaviour of guest OSes with different vector lengths.

To enable testing and experimentation for such configurations, this
patch adds a command-line option to allow setting of the set of
vector lengths to be made available to the guest.

For now, the setting is global: no means is offered to configure
individual guest vcpus independently of each other.

By default all vector lengths that the host can support are given
to the guest, as before.

Signed-off-by: Dave Martin 
---
 arm/aarch64/include/kvm/kvm-config-arch.h |  8 +++-
 arm/aarch64/kvm-cpu.c | 80 ++-
 arm/include/arm-common/kvm-config-arch.h  |  1 +
 3 files changed, 87 insertions(+), 2 deletions(-)

diff --git a/arm/aarch64/include/kvm/kvm-config-arch.h 
b/arm/aarch64/include/kvm/kvm-config-arch.h
index 41e9d05..a996612 100644
--- a/arm/aarch64/include/kvm/kvm-config-arch.h
+++ b/arm/aarch64/include/kvm/kvm-config-arch.h
@@ -1,6 +1,8 @@
 #ifndef KVM__KVM_CONFIG_ARCH_H
 #define KVM__KVM_CONFIG_ARCH_H
 
+int sve_vls_parser(const struct option *opt, const char *arg, int unset);
+
 #define ARM_OPT_ARCH_RUN(cfg)  \
OPT_BOOLEAN('\0', "aarch32", &(cfg)->aarch32_guest, \
"Run AArch32 guest"),   \
@@ -16,7 +18,11 @@
OPT_BOOLEAN('\0', "enable-sve", &(cfg)->enable_sve, \
"Enable SVE for the guest"),\
OPT_BOOLEAN('\0', "disable-sve", &(cfg)->disable_sve,   \
-   "Disable SVE for the guest"),
+   "Disable SVE for the guest"),   \
+   OPT_CALLBACK('\0', "sve-vls", &(cfg)->sve_vqs,  \
+"comma-separated list of vector lengths, in 128-bit 
units", \
+"Set of vector lengths to enable for the guest",   \
+sve_vls_parser, NULL),
 
 #include "arm-common/kvm-config-arch.h"
 
diff --git a/arm/aarch64/kvm-cpu.c b/arm/aarch64/kvm-cpu.c
index cdfb22e..2c624c3 100644
--- a/arm/aarch64/kvm-cpu.c
+++ b/arm/aarch64/kvm-cpu.c
@@ -1,8 +1,13 @@
+#include 
+#include 
+#include 
+
 #include "kvm/kvm-cpu.h"
 #include "kvm/kvm.h"
 #include "kvm/virtio.h"
 
 #include 
+#include 
 
 #define COMPAT_PSR_F_BIT   0x0040
 #define COMPAT_PSR_I_BIT   0x0080
@@ -12,6 +17,65 @@
 #define SCTLR_EL1_E0E_MASK (1 << 24)
 #define SCTLR_EL1_EE_MASK  (1 << 25)
 
+/*
+ * Work around old kernel headers that lack these definitions in
+ * <asm/sve_context.h>:
+ */
+#ifndef SVE_VQ_MIN
+#define SVE_VQ_MIN 1
+#endif
+
+#ifndef SVE_VQ_MAX
+#define SVE_VQ_MAX 512
+#endif
+
+int sve_vls_parser(const struct option *opt, const char *arg, int unset)
+{
+   size_t offset = 0;
+   int vq, n, t;
+   u64 (*vqs)[(SVE_VQ_MAX + 1 - SVE_VQ_MIN + 63) / 64];
+   u64 **cfg_vqs = opt->value;
+
+   if (*cfg_vqs) {
+   pr_err("sve-vls: SVE vector lengths set may only be specified 
once");
+   return -1;
+   }
+
+   vqs = calloc(1, sizeof *vqs);
+   if (!vqs)
+   die("%s", strerror(ENOMEM));
+
+   offset = 0;
+   while (arg[offset]) {
+   n = -1;
+
+   t = sscanf(arg + offset,
+  offset == 0 ? "%i%n" : ",%i%n",
+  &vq, &n);
+   if (t == EOF || t < 1 || n <= 0) {
+   pr_err("sve-vls: Comma-separated list of vector lengths 
required");
+   goto error;
+   }
+
+   if (vq < SVE_VQ_MIN || vq > SVE_VQ_MAX) {
+   pr_err("sve-vls: Invalid vector length %d", vq);
+   goto error;
+   }
+
+   vq -= SVE_VQ_MIN;
+   (*vqs)[vq / 64] |= (u64)1 << (vq % 64);
+
+   offset += n;
+   }
+
+   *cfg_vqs = *vqs;
+   return 0;
+
+error:
+   free(vqs);
+   return -1;
+}
+
 static __u64 __core_reg_id(__u64 offset)
 {
__u64 id = KVM_REG_ARM64 | KVM_REG_ARM_CORE | offset;
@@ -180,6 +244,16 @@ void kvm_cpu__select_features(struct kvm *kvm, struct 
kvm_vcpu_init *init)
 static int configure_sve(struct kvm_cpu *vcpu)
 {
int feature = KVM_ARM_VCPU_SVE;
+   struct kvm_one_reg r = {
+   .id = KVM_REG_ARM64_SVE_VLS,
+   .addr = (u64)vcpu->kvm->cfg.arch.sve_vqs,
+   };
+
+   if (vcpu->kvm->cfg.arch.sve_vqs)
+   if (ioctl(vcpu->vcpu_fd, KVM

[PATCH kvmtool v3 7/9] arm64: Make ptrauth enable/disable diagnostics more user-friendly

2019-05-30 Thread Dave Martin
To help the user understand what is going on, amend ptrauth
configuration diagnostic messages to refer to command line options
by the exact name used on the command line.

Also, provide a clean diagnostic when ptrauth is requested, but not
available.  The generic "Unable to initialise vcpu" message is
rather cryptic for this case.

Since we now don't attempt to enable ptrauth at all unless KVM
reports the relevant capabilities, remove the error message for
that case too: in any case, we can't diagnose precisely why
KVM_ARM_VCPU_INIT failed, so the message may be misleading.

Signed-off-by: Dave Martin 
---
 arm/aarch64/include/kvm/kvm-config-arch.h |  4 ++--
 arm/aarch64/kvm-cpu.c | 15 +++
 arm/kvm-cpu.c |  8 ++--
 3 files changed, 15 insertions(+), 12 deletions(-)

diff --git a/arm/aarch64/include/kvm/kvm-config-arch.h 
b/arm/aarch64/include/kvm/kvm-config-arch.h
index 0279b13..fe1699d 100644
--- a/arm/aarch64/include/kvm/kvm-config-arch.h
+++ b/arm/aarch64/include/kvm/kvm-config-arch.h
@@ -10,9 +10,9 @@
"Specify random seed for Kernel Address Space " \
"Layout Randomization (KASLR)"),\
OPT_BOOLEAN('\0', "enable-ptrauth", &(cfg)->enable_ptrauth, \
-   "Enables pointer authentication"),  \
+   "Enable pointer authentication for the guest"), \
OPT_BOOLEAN('\0', "disable-ptrauth", &(cfg)->disable_ptrauth,   \
-   "Disables pointer authentication"),
+   "Disable pointer authentication for the guest"),
 
 #include "arm-common/kvm-config-arch.h"
 
diff --git a/arm/aarch64/kvm-cpu.c b/arm/aarch64/kvm-cpu.c
index d3c32e0..08e4fd5 100644
--- a/arm/aarch64/kvm-cpu.c
+++ b/arm/aarch64/kvm-cpu.c
@@ -130,16 +130,23 @@ static void reset_vcpu_aarch64(struct kvm_cpu *vcpu)
 
 static void select_ptrauth_feature(struct kvm *kvm, struct kvm_vcpu_init *init)
 {
+   bool supported;
+
/* Check Pointer Authentication command line arguments. */
if (kvm->cfg.arch.enable_ptrauth && kvm->cfg.arch.disable_ptrauth)
-   die("Both enable-ptrauth and disable-ptrauth option cannot be 
present");
+   die("--enable-ptrauth conflicts with --disable-ptrauth");
+
+   supported = kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_ADDRESS) &&
+   kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_GENERIC);
+
+   if (kvm->cfg.arch.enable_ptrauth && !supported)
+   die("--enable-ptrauth not supported on this host");
+
/*
 * Always enable Pointer Authentication if system supports
 * this extension unless disable-ptrauth option is present.
 */
-   if (kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_ADDRESS) &&
-   kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_GENERIC) &&
-   !kvm->cfg.arch.disable_ptrauth) {
+   if (supported && !kvm->cfg.arch.disable_ptrauth) {
init->features[0] |= 1UL << KVM_ARM_VCPU_PTRAUTH_ADDRESS;
init->features[0] |= 1UL << KVM_ARM_VCPU_PTRAUTH_GENERIC;
}
diff --git a/arm/kvm-cpu.c b/arm/kvm-cpu.c
index 764fb05..1652f6f 100644
--- a/arm/kvm-cpu.c
+++ b/arm/kvm-cpu.c
@@ -108,12 +108,8 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, 
unsigned long cpu_id)
die("Unable to find matching target");
}
 
-   if (err || target->init(vcpu)) {
-   if (kvm->cfg.arch.enable_ptrauth)
-   die("Unable to initialise vcpu with pointer 
authentication feature");
-   else
-   die("Unable to initialise vcpu");
-   }
+   if (err || target->init(vcpu))
+   die("Unable to initialise vcpu");
 
coalesced_offset = ioctl(kvm->sys_fd, KVM_CHECK_EXTENSION,
 KVM_CAP_COALESCED_MMIO);
-- 
2.1.4



[PATCH kvmtool v3 5/9] KVM: arm/arm64: Add a vcpu feature for pointer authentication

2019-05-30 Thread Dave Martin
From: Amit Daniel Kachhap 

This patch adds a runtime capability for the KVM tool to enable Armv8.3
Pointer Authentication in the guest kernel. The two vcpu features
KVM_ARM_VCPU_PTRAUTH_[ADDRESS/GENERIC] are supplied together to enable
Pointer Authentication in the KVM guest after checking the capability.

Command line options --enable-ptrauth and --disable-ptrauth are added
to control this feature. If neither option is provided, the feature is
enabled automatically whenever the host supports it.

The macros defined in the headers are not in sync and should be replaced
with the upstream versions.

Signed-off-by: Amit Daniel Kachhap 
Signed-off-by: Dave Martin  [merge new kernel headers]
---
 arm/aarch32/include/kvm/kvm-cpu-arch.h|  2 ++
 arm/aarch64/include/kvm/kvm-config-arch.h |  6 +-
 arm/aarch64/include/kvm/kvm-cpu-arch.h|  3 +++
 arm/include/arm-common/kvm-config-arch.h  |  2 ++
 arm/kvm-cpu.c | 20 ++--
 5 files changed, 30 insertions(+), 3 deletions(-)

diff --git a/arm/aarch32/include/kvm/kvm-cpu-arch.h 
b/arm/aarch32/include/kvm/kvm-cpu-arch.h
index d28ea67..3ec6f03 100644
--- a/arm/aarch32/include/kvm/kvm-cpu-arch.h
+++ b/arm/aarch32/include/kvm/kvm-cpu-arch.h
@@ -13,4 +13,6 @@
 #define ARM_CPU_ID 0, 0, 0
 #define ARM_CPU_ID_MPIDR   5
 
+#define ARM_VCPU_PTRAUTH_FEATURE   0
+
 #endif /* KVM__KVM_CPU_ARCH_H */
diff --git a/arm/aarch64/include/kvm/kvm-config-arch.h 
b/arm/aarch64/include/kvm/kvm-config-arch.h
index 04be43d..0279b13 100644
--- a/arm/aarch64/include/kvm/kvm-config-arch.h
+++ b/arm/aarch64/include/kvm/kvm-config-arch.h
@@ -8,7 +8,11 @@
"Create PMUv3 device"), \
OPT_U64('\0', "kaslr-seed", &(cfg)->kaslr_seed, \
"Specify random seed for Kernel Address Space " \
-   "Layout Randomization (KASLR)"),
+   "Layout Randomization (KASLR)"),\
+   OPT_BOOLEAN('\0', "enable-ptrauth", &(cfg)->enable_ptrauth, \
+   "Enables pointer authentication"),  \
+   OPT_BOOLEAN('\0', "disable-ptrauth", &(cfg)->disable_ptrauth,   \
+   "Disables pointer authentication"),
 
 #include "arm-common/kvm-config-arch.h"
 
diff --git a/arm/aarch64/include/kvm/kvm-cpu-arch.h 
b/arm/aarch64/include/kvm/kvm-cpu-arch.h
index a9d8563..9fa99fb 100644
--- a/arm/aarch64/include/kvm/kvm-cpu-arch.h
+++ b/arm/aarch64/include/kvm/kvm-cpu-arch.h
@@ -17,4 +17,7 @@
 #define ARM_CPU_CTRL   3, 0, 1, 0
 #define ARM_CPU_CTRL_SCTLR_EL1 0
 
+#define ARM_VCPU_PTRAUTH_FEATURE   ((1UL << KVM_ARM_VCPU_PTRAUTH_ADDRESS) \
+   | (1UL << KVM_ARM_VCPU_PTRAUTH_GENERIC))
+
 #endif /* KVM__KVM_CPU_ARCH_H */
diff --git a/arm/include/arm-common/kvm-config-arch.h 
b/arm/include/arm-common/kvm-config-arch.h
index 5734c46..1b4287d 100644
--- a/arm/include/arm-common/kvm-config-arch.h
+++ b/arm/include/arm-common/kvm-config-arch.h
@@ -10,6 +10,8 @@ struct kvm_config_arch {
boolaarch32_guest;
boolhas_pmuv3;
u64 kaslr_seed;
+   boolenable_ptrauth;
+   booldisable_ptrauth;
enum irqchip_type irqchip;
u64 fw_addr;
 };
diff --git a/arm/kvm-cpu.c b/arm/kvm-cpu.c
index 7780251..acd1d5f 100644
--- a/arm/kvm-cpu.c
+++ b/arm/kvm-cpu.c
@@ -68,6 +68,18 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, unsigned 
long cpu_id)
vcpu_init.features[0] |= (1UL << KVM_ARM_VCPU_PSCI_0_2);
}
 
+   /* Check Pointer Authentication command line arguments. */
+   if (kvm->cfg.arch.enable_ptrauth && kvm->cfg.arch.disable_ptrauth)
+   die("Both enable-ptrauth and disable-ptrauth option cannot be 
present");
+   /*
+* Always enable Pointer Authentication if system supports
+* this extension unless disable-ptrauth option is present.
+*/
+   if (kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_ADDRESS) &&
+   kvm__supports_extension(kvm, KVM_CAP_ARM_PTRAUTH_GENERIC) &&
+   !kvm->cfg.arch.disable_ptrauth)
+   vcpu_init.features[0] |= ARM_VCPU_PTRAUTH_FEATURE;
+
/*
 * If the preferred target ioctl is successful then
 * use preferred target else try each and every target type
@@ -106,8 +118,12 @@ struct kvm_cpu *kvm_cpu__arch_init(struct kvm *kvm, 
unsigned long cpu_id)
die("Unable to find matching target");
}
 
-   if (err || target->init(vcpu))
-   die("Unable to initialise vcpu");
+   if (err || target->init(vcpu)) {
+   

[PATCH kvmtool v3 3/9] update_headers.sh: arm64: Copy sve_context.h if available

2019-05-30 Thread Dave Martin
The SVE KVM support for arm64 includes the additional backend
header <asm/sve_context.h> from <asm/kvm.h>.

So update this header if it is available.

To avoid creating a sudden dependency on a specific minimum kernel
version, ignore the header if the source kernel tree doesn't have
it.

Signed-off-by: Dave Martin 
---
 util/update_headers.sh | 13 -
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/util/update_headers.sh b/util/update_headers.sh
index a7e21b8..90d3ead 100755
--- a/util/update_headers.sh
+++ b/util/update_headers.sh
@@ -25,11 +25,22 @@ fi
 
 cp -- "$LINUX_ROOT/include/uapi/linux/kvm.h" include/linux
 
+unset KVMTOOL_PATH
+
+copy_arm64 () {
+   local src=$LINUX_ROOT/arch/$arch/include/uapi/asm/sve_context.h
+
+   if [ -e "$src" ]
+   then
+   cp -- "$src" "$KVMTOOL_PATH/include/asm"
+   fi
+}
+
 for arch in arm arm64 mips powerpc x86
 do
case "$arch" in
arm) KVMTOOL_PATH=arm/aarch32 ;;
-   arm64) KVMTOOL_PATH=arm/aarch64 ;;
+   arm64) KVMTOOL_PATH=arm/aarch64; copy_arm64 ;;
*) KVMTOOL_PATH=$arch ;;
esac
cp -- "$LINUX_ROOT/arch/$arch/include/uapi/asm/kvm.h" \
-- 
2.1.4



[PATCH kvmtool v3 4/9] update_headers: Sync kvm UAPI headers with linux v5.1-rc1

2019-05-30 Thread Dave Martin
Pull in upstream UAPI headers, for subsequent arm64 SVE / ptrauth
support (among other things).

Signed-off-by: Dave Martin 
---
 arm/aarch64/include/asm/kvm.h | 43 
 arm/aarch64/include/asm/sve_context.h | 53 +++
 include/linux/kvm.h   | 15 --
 powerpc/include/asm/kvm.h | 48 +++
 x86/include/asm/kvm.h |  1 +
 5 files changed, 158 insertions(+), 2 deletions(-)
 create mode 100644 arm/aarch64/include/asm/sve_context.h

diff --git a/arm/aarch64/include/asm/kvm.h b/arm/aarch64/include/asm/kvm.h
index 97c3478..7b7ac0f 100644
--- a/arm/aarch64/include/asm/kvm.h
+++ b/arm/aarch64/include/asm/kvm.h
@@ -35,6 +35,7 @@
 #include 
 #include 
 #include 
+#include <asm/sve_context.h>
 
 #define __KVM_HAVE_GUEST_DEBUG
 #define __KVM_HAVE_IRQ_LINE
@@ -102,6 +103,9 @@ struct kvm_regs {
 #define KVM_ARM_VCPU_EL1_32BIT 1 /* CPU running a 32bit VM */
 #define KVM_ARM_VCPU_PSCI_0_2  2 /* CPU uses PSCI v0.2 */
 #define KVM_ARM_VCPU_PMU_V33 /* Support guest PMUv3 */
+#define KVM_ARM_VCPU_SVE   4 /* enable SVE for this CPU */
+#define KVM_ARM_VCPU_PTRAUTH_ADDRESS   5 /* VCPU uses address authentication */
+#define KVM_ARM_VCPU_PTRAUTH_GENERIC   6 /* VCPU uses generic authentication */
 
 struct kvm_vcpu_init {
__u32 target;
@@ -226,6 +230,45 @@ struct kvm_vcpu_events {
 KVM_REG_ARM_FW | ((r) & 0x))
 #define KVM_REG_ARM_PSCI_VERSION   KVM_REG_ARM_FW_REG(0)
 
+/* SVE registers */
+#define KVM_REG_ARM64_SVE  (0x15 << KVM_REG_ARM_COPROC_SHIFT)
+
+/* Z- and P-regs occupy blocks at the following offsets within this range: */
+#define KVM_REG_ARM64_SVE_ZREG_BASE0
+#define KVM_REG_ARM64_SVE_PREG_BASE0x400
+#define KVM_REG_ARM64_SVE_FFR_BASE 0x600
+
+#define KVM_ARM64_SVE_NUM_ZREGS__SVE_NUM_ZREGS
+#define KVM_ARM64_SVE_NUM_PREGS__SVE_NUM_PREGS
+
+#define KVM_ARM64_SVE_MAX_SLICES   32
+
+#define KVM_REG_ARM64_SVE_ZREG(n, i)   \
+   (KVM_REG_ARM64 | KVM_REG_ARM64_SVE | KVM_REG_ARM64_SVE_ZREG_BASE | \
+KVM_REG_SIZE_U2048 |   \
+(((n) & (KVM_ARM64_SVE_NUM_ZREGS - 1)) << 5) | \
+((i) & (KVM_ARM64_SVE_MAX_SLICES - 1)))
+
+#define KVM_REG_ARM64_SVE_PREG(n, i)   \
+   (KVM_REG_ARM64 | KVM_REG_ARM64_SVE | KVM_REG_ARM64_SVE_PREG_BASE | \
+KVM_REG_SIZE_U256 |\
+(((n) & (KVM_ARM64_SVE_NUM_PREGS - 1)) << 5) | \
+((i) & (KVM_ARM64_SVE_MAX_SLICES - 1)))
+
+#define KVM_REG_ARM64_SVE_FFR(i)   \
+   (KVM_REG_ARM64 | KVM_REG_ARM64_SVE | KVM_REG_ARM64_SVE_FFR_BASE | \
+KVM_REG_SIZE_U256 |\
+((i) & (KVM_ARM64_SVE_MAX_SLICES - 1)))
+
+#define KVM_ARM64_SVE_VQ_MIN __SVE_VQ_MIN
+#define KVM_ARM64_SVE_VQ_MAX __SVE_VQ_MAX
+
+/* Vector lengths pseudo-register: */
+#define KVM_REG_ARM64_SVE_VLS  (KVM_REG_ARM64 | KVM_REG_ARM64_SVE | \
+KVM_REG_SIZE_U512 | 0x)
+#define KVM_ARM64_SVE_VLS_WORDS\
+   ((KVM_ARM64_SVE_VQ_MAX - KVM_ARM64_SVE_VQ_MIN) / 64 + 1)
+
 /* Device Control API: ARM VGIC */
 #define KVM_DEV_ARM_VGIC_GRP_ADDR  0
 #define KVM_DEV_ARM_VGIC_GRP_DIST_REGS 1
diff --git a/arm/aarch64/include/asm/sve_context.h 
b/arm/aarch64/include/asm/sve_context.h
new file mode 100644
index 000..754ab75
--- /dev/null
+++ b/arm/aarch64/include/asm/sve_context.h
@@ -0,0 +1,53 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+/* Copyright (C) 2017-2018 ARM Limited */
+
+/*
+ * For use by other UAPI headers only.
+ * Do not make direct use of header or its definitions.
+ */
+
+#ifndef _UAPI__ASM_SVE_CONTEXT_H
+#define _UAPI__ASM_SVE_CONTEXT_H
+
+#include <linux/types.h>
+
+#define __SVE_VQ_BYTES 16  /* number of bytes per quadword */
+
+#define __SVE_VQ_MIN   1
+#define __SVE_VQ_MAX   512
+
+#define __SVE_VL_MIN   (__SVE_VQ_MIN * __SVE_VQ_BYTES)
+#define __SVE_VL_MAX   (__SVE_VQ_MAX * __SVE_VQ_BYTES)
+
+#define __SVE_NUM_ZREGS32
+#define __SVE_NUM_PREGS16
+
+#define __sve_vl_valid(vl) \
+   ((vl) % __SVE_VQ_BYTES == 0 &&  \
+(vl) >= __SVE_VL_MIN &&\
+(vl) <= __SVE_VL_MAX)
+
+#define __sve_vq_from_vl(vl)   ((vl) / __SVE_VQ_BYTES)
+#define __sve_vl_from_vq(vq)   ((vq) * __SVE_VQ_BYTES)
+
+#define __SVE_ZREG_SIZE(vq)((__u32)(vq) * __SVE_VQ_BYTES)
+#define __SVE_PREG_SIZE(vq)((__u32)(vq) * (__SVE_VQ_BYTES / 8))
+#define __SVE_FFR_SIZE(vq) __SVE_PREG_SIZE(vq)
+
+#

[PATCH kvmtool v3 2/9] update_headers.sh: Cleanly report failure on error

2019-05-30 Thread Dave Martin
If an intermediate step fails, update_headers.sh blindly continues
and may return success status.

To avoid errors going unnoticed when driving this script, exit and
report failure status as soon as something goes wrong.  For good
measure, also fail on expansion of undefined shell variables to aid
future maintainers.

Signed-off-by: Dave Martin 
---
 util/update_headers.sh | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/util/update_headers.sh b/util/update_headers.sh
index 4ba1b9f..a7e21b8 100755
--- a/util/update_headers.sh
+++ b/util/update_headers.sh
@@ -7,6 +7,8 @@
 # using the lib/modules/`uname -r`/source link.
 
 
+set -ue
+
 if [ "$#" -ge 1 ]
 then
LINUX_ROOT="$1"
-- 
2.1.4



[PATCH kvmtool v3 1/9] update_headers.sh: Add missing shell quoting

2019-05-30 Thread Dave Martin
update_headers.sh can break if the current working directory has a
funny name or if something odd is passed for LINUX_ROOT.

In the interest of cleanliness, quote where appropriate.
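A sketch (not part of the patch) of the failure mode: an unquoted expansion undergoes word splitting, so a LINUX_ROOT containing whitespace falls apart into multiple arguments. The path below is a made-up example.

```shell
#!/bin/sh
LINUX_ROOT="/tmp/linux tree"   # hypothetical awkward path

count_args() { echo "$#"; }

unquoted=$(count_args $LINUX_ROOT/include)     # split into two words
quoted=$(count_args "$LINUX_ROOT/include")     # stays one argument

echo "unquoted: $unquoted arguments; quoted: $quoted argument"
```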

Signed-off-by: Dave Martin 
---
 util/update_headers.sh | 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/util/update_headers.sh b/util/update_headers.sh
index 2d93646..4ba1b9f 100755
--- a/util/update_headers.sh
+++ b/util/update_headers.sh
@@ -11,17 +11,17 @@ if [ "$#" -ge 1 ]
 then
LINUX_ROOT="$1"
 else
-   LINUX_ROOT=/lib/modules/$(uname -r)/source
+   LINUX_ROOT="/lib/modules/$(uname -r)/source"
 fi
 
-if [ ! -d $LINUX_ROOT/include/uapi/linux ]
+if [ ! -d "$LINUX_ROOT/include/uapi/linux" ]
 then
echo "$LINUX_ROOT does not seem to be valid Linux source tree."
echo "usage: $0 [path-to-Linux-source-tree]"
exit 1
 fi
 
-cp $LINUX_ROOT/include/uapi/linux/kvm.h include/linux
+cp -- "$LINUX_ROOT/include/uapi/linux/kvm.h" include/linux
 
 for arch in arm arm64 mips powerpc x86
 do
@@ -30,6 +30,6 @@ do
arm64) KVMTOOL_PATH=arm/aarch64 ;;
*) KVMTOOL_PATH=$arch ;;
esac
-   cp $LINUX_ROOT/arch/$arch/include/uapi/asm/kvm.h \
-   $KVMTOOL_PATH/include/asm
+   cp -- "$LINUX_ROOT/arch/$arch/include/uapi/asm/kvm.h" \
+   "$KVMTOOL_PATH/include/asm"
 done
-- 
2.1.4



[PATCH kvmtool v3 0/9] arm64: Pointer Authentication and SVE support

2019-05-30 Thread Dave Martin
This series, based on kvmtool master [1], implements basic support for
pointer authentication and SVE for guests.

A git tree is also available [2].

For pointer auth, I include Amit's v10 patch [3], with some additional
refactoring to sit nicely alongside SVE, and some cosmetic / diagnostic
tidyups discussed during review on-list.  (I've kept the extra changes
separate for easier review, but they could be folded if desired.)

[Maintainer note: I'd like Amit to comment on my changes on top of his
pointer auth patch, but the first 4 patches just re-sync headers and
could be pulled earlier if you feel like it.]


This series has been tested against Linux v5.2-rc1.

If people have a strong view on the --sve-vls parameter, I'd be
interested to discuss what that should look like.  Since this is
primarily a debug/experimentation option, the current implementation is
probably good enough though.

[1] 
git://git.kernel.org/pub/scm/linux/kernel/git/will/kvmtool.git master
https://git.kernel.org/pub/scm/linux/kernel/git/will/kvmtool.git/log/
eaeaf60808d6 ("virtio/blk: Avoid taking pointer to packed struct")

[2]
git://linux-arm.org/kvmtool-dm.git sve/v3/head
http://linux-arm.org/git?p=kvmtool-dm.git;a=shortlog;h=refs/heads/sve/v3/head

[3] [PATCH v10 3/5] KVM: arm64: Add userspace flag to enable pointer
authentication
https://lore.kernel.org/linux-arm-kernel/1555994558-26349-6-git-send-email-amit.kach...@arm.com/


Amit Daniel Kachhap (1):
  KVM: arm/arm64: Add a vcpu feature for pointer authentication

Dave Martin (8):
  update_headers.sh: Add missing shell quoting
  update_headers.sh: Cleanly report failure on error
  update_headers.sh: arm64: Copy sve_context.h if available
  update_headers: Sync kvm UAPI headers with linux v5.1-rc1
  arm/arm64: Factor out ptrauth vcpu feature setup
  arm64: Make ptrauth enable/disable diagnostics more user-friendly
  arm64: Add SVE support
  arm64: Select SVE vector lengths via the command line

 arm/aarch32/include/kvm/kvm-cpu-arch.h|   7 ++
 arm/aarch64/include/asm/kvm.h |  43 +
 arm/aarch64/include/asm/sve_context.h |  53 +++
 arm/aarch64/include/kvm/kvm-config-arch.h |  16 +++-
 arm/aarch64/include/kvm/kvm-cpu-arch.h|   3 +
 arm/aarch64/kvm-cpu.c | 148 ++
 arm/include/arm-common/kvm-config-arch.h  |   5 +
 arm/kvm-cpu.c |   5 +
 include/linux/kvm.h   |  15 ++-
 powerpc/include/asm/kvm.h |  48 ++
 util/update_headers.sh|  25 +++--
 x86/include/asm/kvm.h |   1 +
 12 files changed, 360 insertions(+), 9 deletions(-)
 create mode 100644 arm/aarch64/include/asm/sve_context.h

-- 
2.1.4



Re: [kvmtool PATCH v10 5/5] KVM: arm/arm64: Add a vcpu feature for pointer authentication

2019-05-28 Thread Dave Martin
On Tue, May 28, 2019 at 06:18:16PM +0530, Amit Daniel Kachhap wrote:
> Hi Dave,

[...]

> >Were you planning to repost this?
> >
> >Alternatively, I can fix up the diagnostic messages discussed here and
> >post it together with the SVE support.  I'll do that locally for now,
> >but let me know what you plan to do.  I'd like to get the SVE support
> >posted soon so that people can test it.
> 
> I will clean up the print messages as you suggested and repost it shortly.

OK, thanks.

In the meantime I'll rework the SVE config option stuff on what we
discussed.

Cheers
---Dave


Re: [PATCH] KVM: arm64: fix ptrauth ID register masking logic

2019-05-02 Thread Dave Martin
On Wed, May 01, 2019 at 05:20:49PM +0100, Marc Zyngier wrote:
> On 01/05/2019 17:10, Kristina Martsenko wrote:
> > When a VCPU doesn't have pointer auth, we want to hide all four pointer
> > auth ID register fields from the guest, not just one of them.
> > 
> > Fixes: 384b40caa8af ("KVM: arm/arm64: Context-switch ptrauth registers")
> > Reported-by: Andrew Murray 
> > Fsck-up-by: Marc Zyngier 
> > Signed-off-by: Kristina Martsenko 
> > ---
> >  arch/arm64/kvm/sys_regs.c | 8 
> >  1 file changed, 4 insertions(+), 4 deletions(-)
> > 
> > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > index 9d02643bc601..857b226bcdde 100644
> > --- a/arch/arm64/kvm/sys_regs.c
> > +++ b/arch/arm64/kvm/sys_regs.c
> > @@ -1088,10 +1088,10 @@ static u64 read_id_reg(const struct kvm_vcpu *vcpu,
> > if (id == SYS_ID_AA64PFR0_EL1 && !vcpu_has_sve(vcpu)) {
> > val &= ~(0xfUL << ID_AA64PFR0_SVE_SHIFT);
> > } else if (id == SYS_ID_AA64ISAR1_EL1 && !vcpu_has_ptrauth(vcpu)) {
> > -   val &= ~(0xfUL << ID_AA64ISAR1_APA_SHIFT) |
> > -   (0xfUL << ID_AA64ISAR1_API_SHIFT) |
> > -   (0xfUL << ID_AA64ISAR1_GPA_SHIFT) |
> > -   (0xfUL << ID_AA64ISAR1_GPI_SHIFT);
> > +   val &= ~((0xfUL << ID_AA64ISAR1_APA_SHIFT) |
> > +(0xfUL << ID_AA64ISAR1_API_SHIFT) |
> > +(0xfUL << ID_AA64ISAR1_GPA_SHIFT) |
> > +(0xfUL << ID_AA64ISAR1_GPI_SHIFT));
> > }
> >  
> > return val;
> > 
> 
> Applied and pushed to -next. Thanks Andrew for reporting it, and
> Kristina for putting me right!

I was worried this was my mistake... but it looks like my original
suggestion did have the extra ().

Anyway, FWIW,

Acked-by: Dave Martin 


Re: [PATCH v2 07/14] KVM: arm64/sve: Make register ioctl access errors more consistent

2019-04-25 Thread Dave Martin
On Thu, Apr 25, 2019 at 04:04:36PM +0100, Alex Bennée wrote:
> 
> Dave Martin  writes:
> 
> > On Thu, Apr 25, 2019 at 01:30:29PM +0100, Alex Bennée wrote:
> >>
> >> Dave Martin  writes:
> >>
> >> > Currently, the way error codes are generated when processing the
> >> > SVE register access ioctls is a bit haphazard.
> >> >
> >> > This patch refactors the code so that the behaviour is more
> >> > consistent: now, -EINVAL should be returned only for unrecognised
> >> > register IDs or when some other runtime error occurs.  -ENOENT is
> >> > returned for register IDs that are recognised, but whose
> >> > corresponding register (or slice) does not exist for the vcpu.
> >> >
> >> > To this end, in {get,set}_sve_reg() we now delegate the
> >> > vcpu_has_sve() check down into {get,set}_sve_vls() and
> >> > sve_reg_to_region().  The KVM_REG_ARM64_SVE_VLS special case is
> >> > picked off first, then sve_reg_to_region() plays the role of
> >> > exhaustively validating or rejecting the register ID and (where
> >> > accepted) computing the applicable register region as before.
> >> >
> >> > sve_reg_to_region() is rearranged so that -ENOENT or -EPERM is not
> >> > returned prematurely, before checking whether reg->id is in a
> >> > recognised range.
> >> >
> >> > -EPERM is now only returned when an attempt is made to access an
> >> > actually existing register slice on an unfinalized vcpu.
> >> >
> >> > Fixes: e1c9c98345b3 ("KVM: arm64/sve: Add SVE support to register access 
> >> > ioctl interface")
> >> > Fixes: 9033bba4b535 ("KVM: arm64/sve: Add pseudo-register for the 
> >> > guest's vector lengths")
> >> > Suggested-by: Andrew Jones 
> >> > Signed-off-by: Dave Martin 
> >> > Reviewed-by: Andrew Jones 
> >
> > [...]
> >
> >> > @@ -335,25 +344,30 @@ static int sve_reg_to_region(struct 
> >> > sve_state_reg_region *region,
> >> >  /* Verify that we match the UAPI header: */
> >> >  BUILD_BUG_ON(SVE_NUM_SLICES != KVM_ARM64_SVE_MAX_SLICES);
> >> >
> >> > -if ((reg->id & SVE_REG_SLICE_MASK) > 0)
> >> > -return -ENOENT;
> >> > -
> >> > -vq = sve_vq_from_vl(vcpu->arch.sve_max_vl);
> >> > -
> >> >  reg_num = (reg->id & SVE_REG_ID_MASK) >> SVE_REG_ID_SHIFT;
> >> >
> >> >  if (reg->id >= zreg_id_min && reg->id <= zreg_id_max) {
> >> > +if (!vcpu_has_sve(vcpu) || (reg->id & 
> >> > SVE_REG_SLICE_MASK) > 0)
> >> > +return -ENOENT;
> >> > +
> >> > +vq = sve_vq_from_vl(vcpu->arch.sve_max_vl);
> >> > +
> >> >  reqoffset = SVE_SIG_ZREG_OFFSET(vq, reg_num) -
> >> >  SVE_SIG_REGS_OFFSET;
> >> >  reqlen = KVM_SVE_ZREG_SIZE;
> >> >  maxlen = SVE_SIG_ZREG_SIZE(vq);
> >> >  } else if (reg->id >= preg_id_min && reg->id <= preg_id_max) {
> >> > +if (!vcpu_has_sve(vcpu) || (reg->id & 
> >> > SVE_REG_SLICE_MASK) > 0)
> >> > +return -ENOENT;
> >> > +
> >> > +vq = sve_vq_from_vl(vcpu->arch.sve_max_vl);
> >> > +
> >>
> >> I suppose you could argue for a:
> >>
> >>if (reg->id >= zreg_id_min && reg->id <= preg_id_max) {
> >>if (!vcpu_has_sve(vcpu) || (reg->id & SVE_REG_SLICE_MASK) > 0)
> >>return -ENOENT;
> >>
> >>vq = sve_vq_from_vl(vcpu->arch.sve_max_vl);
> >>
> >> if (reg->id <= zreg_id_max) {
> >>reqoffset = SVE_SIG_ZREG_OFFSET(vq, reg_num) -
> >>SVE_SIG_REGS_OFFSET;
> >>reqlen = KVM_SVE_ZREG_SIZE;
> >>maxlen = SVE_SIG_ZREG_SIZE(vq);
> >> } else {
> >>reqoffset = SVE_SIG_PREG_OFFSET(vq, reg_num) -
> >>SVE_SIG_REGS_OFFSET;
> >>   

Re: [PATCH v7 13/27] KVM: arm64/sve: Context switch the SVE registers

2019-04-25 Thread Dave Martin
On Wed, Apr 24, 2019 at 03:51:32PM +0100, Alex Bennée wrote:
> 
> Dave Martin  writes:
> 
> > On Thu, Apr 04, 2019 at 10:35:02AM +0200, Andrew Jones wrote:
> >> On Thu, Apr 04, 2019 at 09:10:08AM +0100, Dave Martin wrote:
> >> > On Wed, Apr 03, 2019 at 10:01:45PM +0200, Andrew Jones wrote:
> >> > > On Fri, Mar 29, 2019 at 01:00:38PM +, Dave Martin wrote:
> >> > > > In order to give each vcpu its own view of the SVE registers, this
> >> > > > patch adds context storage via a new sve_state pointer in struct
> >> > > > vcpu_arch.  An additional member sve_max_vl is also added for each
> >> > > > vcpu, to determine the maximum vector length visible to the guest
> >> > > > and thus the value to be configured in ZCR_EL2.LEN while the vcpu
> >> > > > is active.  This also determines the layout and size of the storage
> >> > > > in sve_state, which is read and written by the same backend
> >> > > > functions that are used for context-switching the SVE state for
> >> > > > host tasks.
> >> > > >
> >> > > > On SVE-enabled vcpus, SVE access traps are now handled by switching
> >> > > > in the vcpu's SVE context and disabling the trap before returning
> >> > > > to the guest.  On other vcpus, the trap is not handled and an exit
> >> > > > back to the host occurs, where the handle_sve() fallback path
> >> > > > reflects an undefined instruction exception back to the guest,
> >> > > > consistently with the behaviour of non-SVE-capable hardware (as was
> >> > > > done unconditionally prior to this patch).
> >> > > >
> >> > > > No SVE handling is added on non-VHE-only paths, since VHE is an
> >> > > > architectural and Kconfig prerequisite of SVE.
> >> > > >
> >> > > > Signed-off-by: Dave Martin 
> >> > > > Reviewed-by: Julien Thierry 
> >> > > > Tested-by: zhang.lei 
> >> > > >
> >> > > > ---
> >> > > >
> >> > > > Changes since v5:
> >> > > >
> >> > > >  * [Julien Thierry, Julien Grall] Commit message typo fixes
> >> > > >
> >> > > >  * [Mark Rutland] Rename trap_class to hsr_ec, for consistency with
> >> > > >existing code.
> >> > > >
> >> > > >  * [Mark Rutland] Simplify condition for refusing to handle an
> >> > > >FPSIMD/SVE trap, using multiple if () statements for clarity.  The
> >> > > >previous condition was a bit tortuous, and now that the static_key
> >> > > >checks have been hoisted out, it makes little difference to the
> >> > > >compiler how we express the condition here.
> >> > > > ---
> >> > > >  arch/arm64/include/asm/kvm_host.h |  6 
> >> > > >  arch/arm64/kvm/fpsimd.c   |  5 +--
> >> > > >  arch/arm64/kvm/hyp/switch.c   | 75 
> >> > > > +--
> >> > > >  3 files changed, 66 insertions(+), 20 deletions(-)
> >> > > >
> >> > > > diff --git a/arch/arm64/include/asm/kvm_host.h 
> >> > > > b/arch/arm64/include/asm/kvm_host.h
> >> > > > index 22cf484..4fabfd2 100644
> >> > > > --- a/arch/arm64/include/asm/kvm_host.h
> >> > > > +++ b/arch/arm64/include/asm/kvm_host.h
> >> > > > @@ -228,6 +228,8 @@ struct vcpu_reset_state {
> >> > > >
> >> > > >  struct kvm_vcpu_arch {
> >> > > >  struct kvm_cpu_context ctxt;
> >> > > > +void *sve_state;
> >> > > > +unsigned int sve_max_vl;
> >> > > >
> >> > > >  /* HYP configuration */
> >> > > >  u64 hcr_el2;
> >> > > > @@ -323,6 +325,10 @@ struct kvm_vcpu_arch {
> >> > > >  bool sysregs_loaded_on_cpu;
> >> > > >  };
> >> > > >
> >> > > > +/* Pointer to the vcpu's SVE FFR for sve_{save,load}_state() */
> >> > > > +#define vcpu_sve_pffr(vcpu) ((void *)((char 
> >> > > > *)((vcpu)->arch.sve_state) + \
> >> > > > +  
> >> > > > sve_ffr_offset((vcpu)->arch.sve_max_vl)))

Re: [PATCH v7 12/27] KVM: arm64/sve: System register context switch and access support

2019-04-25 Thread Dave Martin
On Wed, Apr 24, 2019 at 04:21:22PM +0100, Alex Bennée wrote:
> 
> Dave Martin  writes:
> 
> > This patch adds the necessary support for context switching ZCR_EL1
> > for each vcpu.
> >
> > ZCR_EL1 is trapped alongside the FPSIMD/SVE registers, so it makes
> > sense for it to be handled as part of the guest FPSIMD/SVE context
> > for context switch purposes instead of handling it as a general
> > system register.  This means that it can be switched in lazily at
> > the appropriate time.  No effort is made to track host context for
> > this register, since SVE requires VHE: thus the host's value for
> > this register lives permanently in ZCR_EL2 and does not alias the
> > guest's value at any time.
> >
> > The Hyp switch and fpsimd context handling code is extended
> > appropriately.
> >
> > Accessors are added in sys_regs.c to expose the SVE system
> > registers and ID register fields.  Because these need to be
> > conditionally visible based on the guest configuration, they are
> > implemented separately for now rather than by use of the generic
> > system register helpers.  This may be abstracted better later on
> > when/if there are more features requiring this model.
> >
> > ID_AA64ZFR0_EL1 is RO-RAZ for MRS/MSR when SVE is disabled for the
> > guest, but for compatibility with non-SVE aware KVM implementations
> > the register should not be enumerated at all for KVM_GET_REG_LIST
> > in this case.  For consistency we also reject ioctl access to the
> > register.  This ensures that a non-SVE-enabled guest looks the same
> > to userspace, irrespective of whether the kernel KVM implementation
> > supports SVE.
> >
> > Signed-off-by: Dave Martin 
> > Reviewed-by: Julien Thierry 
> > Tested-by: zhang.lei 
> >
> > ---
> >

[...]

> > diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
> > index 1cf4f02..7053bf4 100644
> > --- a/arch/arm64/kvm/fpsimd.c
> > +++ b/arch/arm64/kvm/fpsimd.c
> > @@ -103,14 +103,21 @@ void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu)
> >  void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu)
> >  {
> > unsigned long flags;
> > +   bool host_has_sve = system_supports_sve();
> > +   bool guest_has_sve = vcpu_has_sve(vcpu);
> >
> > local_irq_save(flags);
> >
> > if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED) {
> > +   u64 *guest_zcr = &vcpu->arch.ctxt.sys_regs[ZCR_EL1];
> > +
> 
> Is this just to avoid:
> 
>vcpu->arch.ctxt.sys_regs[ZCR_EL1] = read_sysreg_s(SYS_ZCR_EL12);

No, it's just done to shorten the line.  Otherwise a trailing = is hard
to avoid (which Marc didn't like) or the line has to be over 80 chars
(which I didn't like).

> in fact wouldn't:
> 
>__vcpu_sys_reg(vcpu, ZCR_EL1) = read_sysreg_s(SYS_ZCR_EL12);

We could use __vcpu_sys_reg() yes, I missed that.

I could spin a patch for this, but it doesn't feel like a high priority
at this stage.

[...]

> Reviewed-by: Alex Bennée 

Thanks
---Dave


Re: [PATCH v2 00/14] KVM: arm64: SVE cleanups

2019-04-25 Thread Dave Martin
On Thu, Apr 25, 2019 at 01:35:56PM +0100, Alex Bennée wrote:
> 
> Dave Martin  writes:
> 
> > This series contains some cleanups applicable to the SVE KVM support
> > patches merged into kvmarm/next.  These arose from Andrew Jones'
> > review.
> >
> > Apart from some minor changes to error codes and checking, these are
> > mostly cosmetic / stylistic changes only.
> >
> > The patches are based on
> > git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm.git next
> > 5d8d4af24460 ("arm64: KVM: Fix system register enumeration")
> >
> > This series in git:
> > git://linux-arm.org/linux-dm.git sve-kvm-fixes/v2/head
> > http://linux-arm.org/git?p=linux-dm.git;a=shortlog;h=refs/heads/sve-kvm-fixes/v2/head
> >
> > Tested with qemu and kvmtool on ThunderX2, and with kvmtool on the Arm
> > Fast model (to exercise SVE support).
> 
> These all look good to me:
> 
> Reviewed-by: Alex Bennée 

Thanks for the review!

Cheers
---Dave


  1   2   3   4   5   6   7   8   9   10   >