Re: [PATCH 00/14] KVM: arm64: SVE cleanups

2019-04-15 Thread Andrew Jones
On Fri, Apr 12, 2019 at 05:28:04PM +0100, Dave Martin wrote:
> This series contains some cleanups applicable to the SVE KVM support
> patches merged into kvmarm/next.  These arose from Andrew Jones'
> review.
> 
> Apart from some minor changes to error codes and checking, these are
> mostly cosmetic / stylistic changes only.
> 
> Although straightforward, these changes have not been tested yet:
> I'm posting them now so that reviewers can make a start on them.
> 
> The patches are based on
> git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm.git next
> 5d8d4af24460 ("arm64: KVM: Fix system register enumeration")
> 
> This series in git:
> git://linux-arm.org/linux-dm.git sve-kvm-fixes/v1/head
> http://linux-arm.org/git?p=linux-dm.git;a=shortlog;h=refs/heads/sve-kvm-fixes/v1/head
> 
> Dave Martin (14):
>   arm64/sve: Clarify vq map semantics
>   KVM: arm/arm64: Demote kvm_arm_init_arch_resources() to just set up
> SVE
>   KVM: arm: Make vcpu finalization stubs into inline functions
>   KVM: arm64/sve: sys_regs: Demote redundant vcpu_has_sve() checks to
> WARNs
>   KVM: arm64/sve: Clean up UAPI register ID definitions
>   KVM: arm64/sve: Miscellaneous tidyups in guest.c
>   KVM: arm64/sve: Make register ioctl access errors more consistent
>   KVM: arm64/sve: WARN when avoiding divide-by-zero in
> sve_reg_to_region()
>   KVM: arm64/sve: Simplify KVM_REG_ARM64_SVE_VLS array sizing
>   KVM: arm64/sve: Explain validity checks in set_sve_vls()
>   KVM: arm/arm64: Clean up vcpu finalization function parameter naming
>   KVM: Clarify capability requirements for KVM_ARM_VCPU_FINALIZE
>   KVM: Clarify KVM_{SET,GET}_ONE_REG error code documentation
>   KVM: arm64: Clarify access behaviour for out-of-range SVE register
> slice IDs
> 
>  Documentation/virtual/kvm/api.txt | 24 ++-
>  arch/arm/include/asm/kvm_host.h   | 13 --
>  arch/arm64/include/asm/fpsimd.h   |  4 --
>  arch/arm64/include/asm/kvm_host.h |  4 +-
>  arch/arm64/include/uapi/asm/kvm.h | 32 ++
>  arch/arm64/kernel/fpsimd.c        |  7 ++-
>  arch/arm64/kvm/guest.c            | 91 +++
>  arch/arm64/kvm/reset.c            |  6 +--
>  arch/arm64/kvm/sys_regs.c         |  4 +-
>  virt/kvm/arm/arm.c                |  2 +-
>  10 files changed, 116 insertions(+), 71 deletions(-)
> 
> -- 
> 2.1.4
>

For the series

Reviewed-by: Andrew Jones 


Re: [PATCH 13/14] KVM: Clarify KVM_{SET,GET}_ONE_REG error code documentation

2019-04-15 Thread Andrew Jones
On Fri, Apr 12, 2019 at 05:28:17PM +0100, Dave Martin wrote:
> The current error code documentation for KVM_GET_ONE_REG and
> KVM_SET_ONE_REG could be read as implying that all architectures
> implement these error codes, or that KVM guararntees which error

guarantees

> code is returned in a particular situation.
> 
> Because this is not really the case, this patch waters down the
> documentation explicitly to remove such guarantees.
> 
> EPERM is marked as arm64-specific, since for now arm64 really is
> the only architecture that yields this error code for the
> finalization-required case.  Keeping this as a distinct error code
> is useful however for debugging due to the statefulness of the API
> in this instance.
> 
> No functional change.
> 
> Suggested-by: Andrew Jones 
> Fixes: 395f562f2b4c ("KVM: Document errors for KVM_GET_ONE_REG and 
> KVM_SET_ONE_REG")
> Fixes: 50036ad06b7f ("KVM: arm64/sve: Document KVM API extensions for SVE")
> Signed-off-by: Dave Martin 
> ---
>  Documentation/virtual/kvm/api.txt | 14 +-
>  1 file changed, 9 insertions(+), 5 deletions(-)
> 
> diff --git a/Documentation/virtual/kvm/api.txt 
> b/Documentation/virtual/kvm/api.txt
> index b115b23..74c51c7 100644
> --- a/Documentation/virtual/kvm/api.txt
> +++ b/Documentation/virtual/kvm/api.txt
> @@ -1873,8 +1873,10 @@ Parameters: struct kvm_one_reg (in)
>  Returns: 0 on success, negative value on failure
>  Errors:
>    ENOENT:   no such register
> -  EPERM:    register access forbidden for architecture-dependent reasons
> -  EINVAL:   other errors, such as bad size encoding for a known register
> +  EINVAL:   invalid register ID, or no such register
> +  EPERM:    (arm64) register access not allowed before vcpu finalization
> +(These error codes are indicative only: do not rely on a specific error
> +code being returned in a specific situation.)
>  
>  struct kvm_one_reg {
> __u64 id;
> @@ -2258,10 +2260,12 @@ Architectures: all
>  Type: vcpu ioctl
>  Parameters: struct kvm_one_reg (in and out)
>  Returns: 0 on success, negative value on failure
> -Errors:
> +Errors include:
>    ENOENT:   no such register
> -  EPERM:    register access forbidden for architecture-dependent reasons
> -  EINVAL:   other errors, such as bad size encoding for a known register
> +  EINVAL:   invalid register ID, or no such register
> +  EPERM:    (arm64) register access not allowed before vcpu finalization
> +(These error codes are indicative only: do not rely on a specific error
> +code being returned in a specific situation.)
>  
>  This ioctl allows to receive the value of a single register implemented
>  in a vcpu. The register to read is indicated by the "id" field of the
> -- 
> 2.1.4
>

Reviewed-by: Andrew Jones 
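
For anyone wiring this up in userspace, a minimal sketch of telling the two
cases apart (the helper, its error handling and the register ID are
illustrative assumptions, not part of the patch):

    #include <errno.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* Probe one register; vcpu_fd and reg_id are provided by the caller. */
    static int probe_reg(int vcpu_fd, __u64 reg_id, __u64 *val)
    {
            struct kvm_one_reg reg = { .id = reg_id, .addr = (__u64)val };

            if (ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg) == 0)
                    return 0;
            if (errno == EPERM)             /* arm64: vcpu not finalized yet */
                    fprintf(stderr, "finalize the vcpu first\n");
            else if (errno == ENOENT || errno == EINVAL)
                    fprintf(stderr, "no such register, or bad register ID\n");
            return -errno;
    }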


Re: [PATCH 09/14] KVM: arm64/sve: Simplify KVM_REG_ARM64_SVE_VLS array sizing

2019-04-15 Thread Andrew Jones
On Fri, Apr 12, 2019 at 05:28:13PM +0100, Dave Martin wrote:
> A complicated DIV_ROUND_UP() expression is currently written out
> explicitly in multiple places in order to specify the size of the
> bitmap exchanged with userspace to represent the value of the
> KVM_REG_ARM64_SVE_VLS pseudo-register.
> 
> To make this more readable, this patch replaces these with a single
> define.
> 
> Since the number of words in a bitmap is just the index of the last
> word used + 1, this patch expresses the bound that way instead.
> This should make it clearer what is being expressed.
> 
> Since use of DIV_ROUND_UP() was the only reason for including
> <linux/kernel.h> in guest.c, this patch removes that #include.
> 
> No functional change.
> 
> Suggested-by: Andrew Jones 
> Signed-off-by: Dave Martin 
> ---
>  arch/arm64/kvm/guest.c | 9 +
>  1 file changed, 5 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
> index 73044e3..f025a2f 100644
> --- a/arch/arm64/kvm/guest.c
> +++ b/arch/arm64/kvm/guest.c
> @@ -23,7 +23,6 @@
>  #include 
>  #include 
>  #include 
> -#include <linux/kernel.h>
>  #include 
>  #include 
>  #include 
> @@ -209,8 +208,10 @@ static int set_core_reg(struct kvm_vcpu *vcpu, const 
> struct kvm_one_reg *reg)
>  #define vq_word(vq) (((vq) - SVE_VQ_MIN) / 64)
>  #define vq_mask(vq) ((u64)1 << ((vq) - SVE_VQ_MIN) % 64)
>  
> +#define SVE_VLS_WORDS (vq_word(SVE_VQ_MAX) + 1)
> +
>  static bool vq_present(
> - const u64 (*const vqs)[DIV_ROUND_UP(SVE_VQ_MAX - SVE_VQ_MIN + 1, 64)],
> + const u64 (*const vqs)[SVE_VLS_WORDS],
>   unsigned int vq)
>  {
>   return (*vqs)[vq_word(vq)] & vq_mask(vq);
> @@ -219,7 +220,7 @@ static bool vq_present(
>  static int get_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
>  {
>   unsigned int max_vq, vq;
> - u64 vqs[DIV_ROUND_UP(SVE_VQ_MAX - SVE_VQ_MIN + 1, 64)];
> + u64 vqs[SVE_VLS_WORDS];
>  
>   if (!vcpu_has_sve(vcpu))
>   return -ENOENT;
> @@ -243,7 +244,7 @@ static int get_sve_vls(struct kvm_vcpu *vcpu, const 
> struct kvm_one_reg *reg)
>  static int set_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
>  {
>   unsigned int max_vq, vq;
> - u64 vqs[DIV_ROUND_UP(SVE_VQ_MAX - SVE_VQ_MIN + 1, 64)];
> + u64 vqs[SVE_VLS_WORDS];
>  
>   if (!vcpu_has_sve(vcpu))
>   return -ENOENT;
> -- 
> 2.1.4
>

This is good, but I wonder if we could define the number of VLS words in
the documentation in terms of SVE_VQ_MAX too. Currently it's just the
hard coded 8 ("__u64 vector_lengths[8]").
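
A compile-time sketch of both points, mirroring the kernel macros (the
SVE_VQ_* values are assumptions taken from <asm/sve_context.h>, where
SVE_VQ_MIN is 1 and SVE_VQ_MAX is 512):

    #define SVE_VQ_MIN      1
    #define SVE_VQ_MAX      512
    #define vq_word(vq)     (((vq) - SVE_VQ_MIN) / 64)
    #define SVE_VLS_WORDS   (vq_word(SVE_VQ_MAX) + 1)

    /* The new bound equals the old DIV_ROUND_UP() expression... */
    _Static_assert(SVE_VLS_WORDS ==
                   (SVE_VQ_MAX - SVE_VQ_MIN + 1 + 63) / 64, "bounds agree");
    /* ...and matches the hard coded "__u64 vector_lengths[8]" in the docs. */
    _Static_assert(SVE_VLS_WORDS == 8, "matches documented array size");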

Reviewed-by: Andrew Jones 


Re: [PATCH 05/14] KVM: arm64/sve: Clean up UAPI register ID definitions

2019-04-15 Thread Andrew Jones
On Fri, Apr 12, 2019 at 05:28:09PM +0100, Dave Martin wrote:
> Currently, the SVE register ID macros are not all defined in the
> same way, and advertise the fact that FFR maps onto the nonexistent
> predicate register P16.  This is really just for kernel
> convenience, and may lead userspace into bad habits.
> 
> Instead, this patch masks the ID macro arguments so that
> architecturally invalid register numbers will not be passed through
> any more, and uses a literal KVM_REG_ARM64_SVE_FFR_BASE macro to
> define KVM_REG_ARM64_SVE_FFR(), similarly to the way the _ZREG()
> and _PREG() macros are defined.
> 
> Rather than plugging in magic numbers for the number of Z- and P-
> registers and the maximum possible number of register slices, this
> patch provides definitions for those too.  Userspace is going to
> need them in any case, and it makes sense for them to come from
> <uapi/asm/kvm.h>.
> 
> sve_reg_to_region() uses convenience constants that are defined in
> a different way, and also makes use of the fact that the FFR IDs
> are really contiguous with the P15 IDs, so this patch retains the
> existing convenience constants in guest.c, supplemented with a
> couple of sanity checks to check for consistency with with the UAPI

s/with with/with/

> header.
> 
> Fixes: e1c9c98345b3 ("KVM: arm64/sve: Add SVE support to register access 
> ioctl interface")
> Suggested-by: Andrew Jones 
> Signed-off-by: Dave Martin 
> ---
>  arch/arm64/include/uapi/asm/kvm.h | 32 +++-
>  arch/arm64/kvm/guest.c            |  9 +
>  2 files changed, 32 insertions(+), 9 deletions(-)
>

Reviewed-by: Andrew Jones 
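
As a usage note, the new constants let userspace enumerate every
architecturally valid register ID without magic numbers; a sketch (the loop
and helper are illustrative, the macro and constant names come from the
patch):

    #include <asm/kvm.h>    /* KVM_REG_ARM64_SVE_* definitions */

    /* Collect all Z-register slice IDs; sizing ids[] is the caller's job. */
    static unsigned int sve_zreg_ids(__u64 *ids)
    {
            unsigned int n, i, k = 0;

            for (n = 0; n < KVM_ARM64_SVE_NUM_ZREGS; n++)
                    for (i = 0; i < KVM_ARM64_SVE_MAX_SLICES; i++)
                            ids[k++] = KVM_REG_ARM64_SVE_ZREG(n, i);

            return k;
    }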


Re: [PATCH v4 6/6] KVM: arm/arm64: support chained PMU counters

2019-04-15 Thread Marc Zyngier
On 25/03/2019 18:30, Andrew Murray wrote:
> Emulate chained PMU counters by creating a single 64 bit event counter
> for a pair of chained KVM counters.
> 
> Please note that overflow interrupts are only supported on the high
> counter of a chained counter pair.
> 
> Signed-off-by: Andrew Murray 

Hi Andrew,

Sorry it's been a long time coming, but I finally got some bandwidth to 
have a look at this.

My main issue with the whole thing is that you create new abstractions 
that do not match the HW. Nowhere in the architecture there is the 
notion of "counter pair". You also duplicate some state in the sense 
that your new chain_event duplicates existing data structures (the 
perf_event pointer that exists in each and every PMC).

What I'm proposing is a slightly simpler approach that:

- tracks which "pair" of counters is actually chained
- for any counter, allow a "canonical" counter to be obtained: the low-
order counter if chained, and the counter itself otherwise
- the canonical counter is always holding the perf_event

Have a look at the patch below which applies on top of this series. I've 
only compile-tested it, so it is likely completely broken. Hopefully 
you'll see what I'm aiming for.

Thanks,

M.

diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index ce5f380a6699..8b434745500a 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -32,21 +32,10 @@ struct kvm_pmc {
u64 bitmask;
 };
 
-enum kvm_pmc_type {
-   KVM_PMC_TYPE_PAIR,
-   KVM_PMC_TYPE_CHAIN,
-};
-
-struct kvm_pmc_pair {
-   struct kvm_pmc low;
-   struct kvm_pmc high;
-   enum kvm_pmc_type type;
-   struct perf_event *chain_event;
-};
-
 struct kvm_pmu {
int irq_num;
-   struct kvm_pmc_pair pmc_pair[ARMV8_PMU_MAX_COUNTER_PAIRS];
+   struct kvm_pmc pmc[ARMV8_PMU_MAX_COUNTERS];
+   DECLARE_BITMAP(chained, ARMV8_PMU_MAX_COUNTER_PAIRS);
bool ready;
bool created;
bool irq_level;
diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
index 3a0f1e66c759..f3b86d1d401a 100644
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -28,6 +28,28 @@ static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, 
u64 select_idx);
 
 #define PERF_ATTR_CFG1_KVM_PMU_CHAINED 0x1
 
+static struct kvm_vcpu *kvm_pmc_to_vcpu(struct kvm_pmc *pmc)
+{
+   struct kvm_pmu *pmu;
+   struct kvm_vcpu_arch *vcpu_arch;
+
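+   /*
+    * pmc points at pmu->pmc[pmc->idx], so stepping back by idx lands
+    * on pmc[0]; container_of() then recovers the pmu and the vcpu.
+    */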
+   pmc -= pmc->idx;
+   pmu = container_of(pmc, struct kvm_pmu, pmc[0]);
+   vcpu_arch = container_of(pmu, struct kvm_vcpu_arch, pmu);
+   return container_of(vcpu_arch, struct kvm_vcpu, arch);
+}
+
+/**
+ * kvm_pmu_pair_is_chained - determine if the pair is a chain
+ * @pmc: The PMU counter pointer
+ */
+static bool kvm_pmu_pair_is_chained(struct kvm_pmc *pmc)
+{
+   struct kvm_vcpu *vcpu = kvm_pmc_to_vcpu(pmc);
+
+   return test_bit(pmc->idx >> 1, vcpu->arch.pmu.chained);
+}
+
 /**
  * kvm_pmu_pair_is_high_counter - determine if select_idx is a high/low counter
  * @select_idx: The counter index
@@ -37,16 +59,12 @@ static bool kvm_pmu_pair_is_high_counter(u64 select_idx)
return select_idx & 0x1;
 }
 
-/**
- * kvm_pmu_get_kvm_pmc_pair - obtain a pmc_pair from a pmc
- * @pmc: The PMU counter pointer
- */
-static struct kvm_pmc_pair *kvm_pmu_get_kvm_pmc_pair(struct kvm_pmc *pmc)
+static struct kvm_pmc *kvm_pmu_get_canonical_pmc(struct kvm_pmc *pmc)
 {
-   if (kvm_pmu_pair_is_high_counter(pmc->idx))
-   return container_of(pmc, struct kvm_pmc_pair, high);
-   else
-   return container_of(pmc, struct kvm_pmc_pair, low);
+   if (kvm_pmu_pair_is_chained(pmc) && (pmc->idx & 1))
+   return pmc - 1;
+
+   return pmc;
 }
 
 /**
@@ -58,21 +76,7 @@ static struct kvm_pmc *kvm_pmu_get_kvm_pmc(struct kvm_vcpu 
*vcpu,
   u64 select_idx)
 {
 struct kvm_pmu *pmu = &vcpu->arch.pmu;
-   struct kvm_pmc_pair *pmc_pair = &pmu->pmc_pair[select_idx >> 1];
-
-   return kvm_pmu_pair_is_high_counter(select_idx) ? &pmc_pair->high
-   : &pmc_pair->low;
-}
-
-/**
- * kvm_pmu_pair_is_chained - determine if the pair is a chain
- * @pmc: The PMU counter pointer
- */
-static bool kvm_pmu_pair_is_chained(struct kvm_pmc *pmc)
-{
-   struct kvm_pmc_pair *pmc_pair = kvm_pmu_get_kvm_pmc_pair(pmc);
-
-   return (pmc_pair->type == KVM_PMC_TYPE_CHAIN);
+   return &pmu->pmc[select_idx];
 }
 
 /**
@@ -104,12 +108,7 @@ static bool kvm_pmu_event_is_chained(struct kvm_vcpu 
*vcpu, u64 select_idx)
  */
 static struct perf_event *kvm_pmu_get_perf_event(struct kvm_pmc *pmc)
 {
-   struct kvm_pmc_pair *pmc_pair = kvm_pmu_get_kvm_pmc_pair(pmc);
-
-   if (kvm_pmu_pair_is_chained(pmc))
-   return pmc_pair->chain_event;
-
-   return pmc->perf_event;
+   return kvm_pmu_get_canonical_pmc(pmc)->perf_event;
 }
 
 /**
@@ -123,12 +122,7 @@ static struct perf_event *kvm_pmu_get_perf_event(struct 

Re: [PATCH v5 3/3] KVM: doc: add API documentation on the KVM_REG_ARM_WORKAROUNDS register

2019-04-15 Thread Steven Price
On 15/04/2019 12:15, Andre Przywara wrote:
> Add documentation for the newly defined firmware registers to save and
> restore any vulnerability mitigation status.
> 
> Signed-off-by: Andre Przywara 

Reviewed-by: Steven Price 

Thanks,

Steve

> ---
>  Documentation/virtual/kvm/arm/psci.txt | 31 ++
>  1 file changed, 31 insertions(+)
> 
> diff --git a/Documentation/virtual/kvm/arm/psci.txt 
> b/Documentation/virtual/kvm/arm/psci.txt
> index aafdab887b04..a876c1baa56e 100644
> --- a/Documentation/virtual/kvm/arm/psci.txt
> +++ b/Documentation/virtual/kvm/arm/psci.txt
> @@ -28,3 +28,34 @@ The following register is defined:
>- Allows any PSCI version implemented by KVM and compatible with
>  v0.2 to be set with SET_ONE_REG
>- Affects the whole VM (even if the register view is per-vcpu)
> +
> +* KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1:
> +  Holds the state of the firmware support to mitigate CVE-2017-5715, as
> +  offered by KVM to the guest via a HVC call. The workaround is described
> +  under SMCCC_ARCH_WORKAROUND_1 in [1].
> +  Accepted values are:
> +KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_NOT_AVAIL: KVM does not offer
> +  firmware support for the workaround. The mitigation status for the
> +  guest is unknown.
> +KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_AVAIL: The workaround HVC call is
> +  available to the guest and required for the mitigation.
> +KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_UNAFFECTED: The workaround HVC call
> +  is available to the guest, but it is not needed on this VCPU.
> +
> +* KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2:
> +  Holds the state of the firmware support to mitigate CVE-2018-3639, as
> +  offered by KVM to the guest via a HVC call. The workaround is described
> +  under SMCCC_ARCH_WORKAROUND_2 in [1].
> +  Accepted values are:
> +KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_AVAIL: A workaround is not
> +  available. KVM does not offer firmware support for the workaround.
> +KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_UNKNOWN: The workaround state is
> +  unknown. KVM does not offer firmware support for the workaround.
> +KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_AVAIL: The workaround is available,
> +  and can be disabled by a vCPU. If
> +  KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_ENABLED is set, it is active for
> +  this vCPU.
> +KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_UNAFFECTED: The workaround is always
> +  active on this vCPU or it is not needed.
> +
> +[1] 
> https://developer.arm.com/-/media/developer/pdf/ARM_DEN_0070A_Firmware_interfaces_for_mitigating_CVE-2017-5715.pdf
> 
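
As a usage illustration, migration tooling reads these like any other
firmware register; a sketch (the helper and fd handling are assumptions):

    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* Save WORKAROUND_1 state on the source; restore uses KVM_SET_ONE_REG. */
    static int save_wa1(int vcpu_fd, __u64 *val)
    {
            struct kvm_one_reg reg = {
                    .id   = KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1,
                    .addr = (__u64)val,
            };

            return ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);
    }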



Re: [PATCH v13 8/8] arm64: docs: document perf event attributes

2019-04-15 Thread Suzuki K Poulose
On 09/04/2019 20:22, Andrew Murray wrote:

The interaction between the exclude_{host,guest} flags,
exclude_{user,kernel,hv} flags and presence of VHE can result in
different exception levels being filtered by the ARMv8 PMU. As this
can be confusing let's document how they work on arm64.

Signed-off-by: Andrew Murray 
---


That's really helpful! Thanks for writing one!

Reviewed-by: Suzuki K Poulose 



Re: [PATCH v5 2/3] KVM: arm/arm64: Add save/restore support for firmware workaround state

2019-04-15 Thread Steven Price
On 15/04/2019 12:15, Andre Przywara wrote:
> KVM implements the firmware interface for mitigating cache speculation
> vulnerabilities. Guests may use this interface to ensure mitigation is
> active.
> If we want to migrate such a guest to a host with a different support
> level for those workarounds, migration might need to fail, to ensure that
> critical guests don't lose their protection.
> 
> Introduce a way for userland to save and restore the workarounds state.
> On restoring we do checks that make sure we don't downgrade our
> mitigation level.
> 
> Signed-off-by: Andre Przywara 

Reviewed-by: Steven Price 

Thanks,

Steve


Re: [PATCH v5 1/3] arm64: KVM: Propagate full Spectre v2 workaround state to KVM guests

2019-04-15 Thread Steven Price
On 15/04/2019 12:15, Andre Przywara wrote:
> Recent commits added the explicit notion of "Not affected" to the state
> of the Spectre v2 (aka. BP_HARDENING) workaround, where we just had
> "needed" and "unknown" before.
> 
> Export this knowledge to the rest of the kernel and enhance the existing
> kvm_arm_harden_branch_predictor() to report this new state as well.
> Export this new state to guests when they use KVM's firmware interface
> emulation.
> 
> Signed-off-by: Andre Przywara 
> ---
>  arch/arm/include/asm/kvm_host.h | 12 +---
>  arch/arm64/include/asm/cpufeature.h |  6 ++
>  arch/arm64/include/asm/kvm_host.h   | 16 ++--
>  arch/arm64/kernel/cpu_errata.c  | 23 ++-
>  virt/kvm/arm/psci.c | 10 +-
>  5 files changed, 56 insertions(+), 11 deletions(-)
> 
> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
> index 770d73257ad9..836479e4b340 100644
> --- a/arch/arm/include/asm/kvm_host.h
> +++ b/arch/arm/include/asm/kvm_host.h
> @@ -364,7 +364,11 @@ static inline void kvm_arch_vcpu_put_fp(struct kvm_vcpu 
> *vcpu) {}
>  static inline void kvm_arm_vhe_guest_enter(void) {}
>  static inline void kvm_arm_vhe_guest_exit(void) {}
>  
> -static inline bool kvm_arm_harden_branch_predictor(void)
> +#define KVM_BP_HARDEN_UNKNOWN     -1
> +#define KVM_BP_HARDEN_NEEDED      0
> +#define KVM_BP_HARDEN_MITIGATED   1

I find the naming here a little confusing - it's not really clear what
"mitigated" means (see below).

> +
> +static inline int kvm_arm_harden_branch_predictor(void)
>  {
>   switch(read_cpuid_part()) {
>  #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
> @@ -372,10 +376,12 @@ static inline bool kvm_arm_harden_branch_predictor(void)
>   case ARM_CPU_PART_CORTEX_A12:
>   case ARM_CPU_PART_CORTEX_A15:
>   case ARM_CPU_PART_CORTEX_A17:
> - return true;
> + return KVM_BP_HARDEN_NEEDED;
>  #endif
> + case ARM_CPU_PART_CORTEX_A7:
> + return KVM_BP_HARDEN_MITIGATED;
>   default:
> - return false;
> + return KVM_BP_HARDEN_UNKNOWN;
>   }
>  }
>  
> diff --git a/arch/arm64/include/asm/cpufeature.h 
> b/arch/arm64/include/asm/cpufeature.h
> index 6ccdc97e5d6a..3c5b25c1bda1 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -622,6 +622,12 @@ static inline bool system_uses_irq_prio_masking(void)
>  cpus_have_const_cap(ARM64_HAS_IRQ_PRIO_MASKING);
>  }
>  
> +#define ARM64_BP_HARDEN_UNKNOWN  -1
> +#define ARM64_BP_HARDEN_NEEDED   0
> +#define ARM64_BP_HARDEN_MITIGATED    1
> +
> +int get_spectre_v2_workaround_state(void);
> +
>  #define ARM64_SSBD_UNKNOWN   -1
>  #define ARM64_SSBD_FORCE_DISABLE 0
>  #define ARM64_SSBD_KERNEL    1
> diff --git a/arch/arm64/include/asm/kvm_host.h 
> b/arch/arm64/include/asm/kvm_host.h
> index a01fe087e022..bf9a59b7d1ce 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -555,9 +555,21 @@ static inline void kvm_arm_vhe_guest_exit(void)
>   isb();
>  }
>  
> -static inline bool kvm_arm_harden_branch_predictor(void)
> +#define KVM_BP_HARDEN_UNKNOWN     -1
> +#define KVM_BP_HARDEN_NEEDED      0
> +#define KVM_BP_HARDEN_MITIGATED   1
> +
> +static inline int kvm_arm_harden_branch_predictor(void)
>  {
> - return cpus_have_const_cap(ARM64_HARDEN_BRANCH_PREDICTOR);
> + switch (get_spectre_v2_workaround_state()) {
> + case ARM64_BP_HARDEN_NEEDED:
> + return KVM_BP_HARDEN_NEEDED;
> + case ARM64_BP_HARDEN_MITIGATED:
> + return KVM_BP_HARDEN_MITIGATED;
> + case ARM64_BP_HARDEN_UNKNOWN:
> + default:
> + return KVM_BP_HARDEN_UNKNOWN;
> + }
>  }
>  
>  #define KVM_SSBD_UNKNOWN -1
> diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
> index a1f3188c7be0..7fa23ab09d4e 100644
> --- a/arch/arm64/kernel/cpu_errata.c
> +++ b/arch/arm64/kernel/cpu_errata.c
> @@ -555,6 +555,17 @@ cpu_enable_cache_maint_trap(const struct 
> arm64_cpu_capabilities *__unused)
>  static bool __hardenbp_enab = true;
>  static bool __spectrev2_safe = true;
>  
> +int get_spectre_v2_workaround_state(void)
> +{
> + if (__spectrev2_safe)
> + return ARM64_BP_HARDEN_MITIGATED;
> +
> + if (!__hardenbp_enab)
> + return ARM64_BP_HARDEN_UNKNOWN;
> +
> + return ARM64_BP_HARDEN_NEEDED;
> +}
> +
>  /*
>   * List of CPUs that do not need any Spectre-v2 mitigation at all.
>   */
> @@ -834,13 +845,15 @@ ssize_t cpu_show_spectre_v1(struct device *dev, struct 
> device_attribute *attr,
>  ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute 
> *attr,
>   char *buf)
>  {
> - if (__spectrev2_safe)
> + switch (get_spectre_v2_workaround_state()) {
> + case ARM64_BP_HARDEN_MITIGATED:
>

Re: [PATCH v13 7/8] arm64: KVM: avoid isb's by using direct pmxevtyper sysreg

2019-04-15 Thread Suzuki K Poulose
On 09/04/2019 20:22, Andrew Murray wrote:

Upon entering or exiting a guest we may modify multiple PMU counters to
enable or disable EL0 filtering. We presently do this via the indirect
PMXEVTYPER_EL0 system register (where the counter we modify is selected
by PMSELR). With this approach it is necessary to order the writes via
isb instructions such that we select the correct counter before modifying
it.

Let's avoid potentially expensive instruction barriers by using the
direct PMEVTYPER_EL0 registers instead.

As the change to counter type relates only to EL0 filtering we can rely
on the implicit instruction barrier which occurs when we transition from
EL2 to EL1 on entering the guest. On returning to userspace we can, at the
latest, rely on the implicit barrier between EL2 and EL0. We can also
depend on the explicit isb in armv8pmu_select_counter to order our write
against any other kernel changes by the PMU driver to the type register as
a result of preemption.

Signed-off-by: Andrew Murray 
---
  arch/arm64/kvm/pmu.c | 84 ++--
  1 file changed, 74 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/kvm/pmu.c b/arch/arm64/kvm/pmu.c
index 3407a529e116..4d55193ccc71 100644
--- a/arch/arm64/kvm/pmu.c
+++ b/arch/arm64/kvm/pmu.c
@@ -91,6 +91,74 @@ void __hyp_text __pmu_switch_to_host(struct kvm_cpu_context 
*host_ctxt)
write_sysreg(pmu->events_host, pmcntenset_el0);
  }
  
+#define PMEVTYPER_READ_CASE(idx)   \
+   case idx:   \
+   return read_sysreg(pmevtyper##idx##_el0)
+   return read_sysreg(pmevtyper##idx##_el0)
+
+#define PMEVTYPER_WRITE_CASE(idx)  \
+   case idx:   \
+   write_sysreg(val, pmevtyper##idx##_el0);\
+   break
+
+#define PMEVTYPER_CASES(readwrite) \
+   PMEVTYPER_##readwrite##_CASE(0);\
+   PMEVTYPER_##readwrite##_CASE(1);\
+   PMEVTYPER_##readwrite##_CASE(2);\
+   PMEVTYPER_##readwrite##_CASE(3);\
+   PMEVTYPER_##readwrite##_CASE(4);\
+   PMEVTYPER_##readwrite##_CASE(5);\
+   PMEVTYPER_##readwrite##_CASE(6);\
+   PMEVTYPER_##readwrite##_CASE(7);\
+   PMEVTYPER_##readwrite##_CASE(8);\
+   PMEVTYPER_##readwrite##_CASE(9);\
+   PMEVTYPER_##readwrite##_CASE(10);   \
+   PMEVTYPER_##readwrite##_CASE(11);   \
+   PMEVTYPER_##readwrite##_CASE(12);   \
+   PMEVTYPER_##readwrite##_CASE(13);   \
+   PMEVTYPER_##readwrite##_CASE(14);   \
+   PMEVTYPER_##readwrite##_CASE(15);   \
+   PMEVTYPER_##readwrite##_CASE(16);   \
+   PMEVTYPER_##readwrite##_CASE(17);   \
+   PMEVTYPER_##readwrite##_CASE(18);   \
+   PMEVTYPER_##readwrite##_CASE(19);   \
+   PMEVTYPER_##readwrite##_CASE(20);   \
+   PMEVTYPER_##readwrite##_CASE(21);   \
+   PMEVTYPER_##readwrite##_CASE(22);   \
+   PMEVTYPER_##readwrite##_CASE(23);   \
+   PMEVTYPER_##readwrite##_CASE(24);   \
+   PMEVTYPER_##readwrite##_CASE(25);   \
+   PMEVTYPER_##readwrite##_CASE(26);   \
+   PMEVTYPER_##readwrite##_CASE(27);   \
+   PMEVTYPER_##readwrite##_CASE(28);   \
+   PMEVTYPER_##readwrite##_CASE(29);   \
+   PMEVTYPER_##readwrite##_CASE(30)
+


Don't we need case 31 and deal with the PMCCFILTR, instead of WARN_ON(1)
below ?

Otherwise,

Reviewed-by: Suzuki K Poulose 
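
For context, these macros are meant to be consumed by switch statements
along these lines (a sketch only; the function name is assumed, and as noted
above a case 31 for PMCCFILTR_EL0 may still be needed):

    /* Read PMEVTYPER<n>_EL0 directly, avoiding the PMSELR + isb dance. */
    static u64 read_pmevtyper_direct(int idx)
    {
            switch (idx) {
            PMEVTYPER_CASES(READ);
            default:
                    WARN_ON(1);
            }

            return 0;
    }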


Re: [PATCH v13 6/8] arm64: KVM: Enable VHE support for :G/:H perf event modifiers

2019-04-15 Thread Suzuki K Poulose
On 09/04/2019 20:22, Andrew Murray wrote:

With VHE different exception levels are used between the host (EL2) and
guest (EL1) with a shared exception level for userspace (EL0). We can take
advantage of this and use the PMU's exception level filtering to avoid
enabling/disabling counters in the world-switch code. Instead we just


s/Instead// ?


modify the counter type to include or exclude EL0 at vcpu_{load,put} time.

We also ensure that trapped PMU system register writes do not re-enable
EL0 when reconfiguring the backing perf events.

This approach completely avoids blackout windows seen with !VHE.

Suggested-by: Christoffer Dall 
Signed-off-by: Andrew Murray 
Acked-by: Will Deacon 

+/*
+ * On VHE ensure that only guest events have EL0 counting enabled
+ */
+void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu)
+{
+   struct kvm_cpu_context *host_ctxt;
+   struct kvm_host_data *host;
+   u32 events_guest, events_host;
+
+   if (!has_vhe())
+   return;
+
+   host_ctxt = vcpu->arch.host_cpu_context;
+   host = container_of(host_ctxt, struct kvm_host_data, host_ctxt);
+   events_guest = host->pmu_events.events_guest;
+   events_host = host->pmu_events.events_host;
+
+   kvm_vcpu_pmu_enable_el0(events_guest);
+   kvm_vcpu_pmu_disable_el0(events_host);
+}
+
+/*
+ * On VHE ensure that only guest host have EL0 counting enabled


nit: s/guest/host/host events/


+ */
+void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu)
+{
+   struct kvm_cpu_context *host_ctxt;
+   struct kvm_host_data *host;
+   u32 events_guest, events_host;
+
+   if (!has_vhe())
+   return;
+
+   host_ctxt = vcpu->arch.host_cpu_context;
+   host = container_of(host_ctxt, struct kvm_host_data, host_ctxt);
+   events_guest = host->pmu_events.events_guest;
+   events_host = host->pmu_events.events_host;
+
+   kvm_vcpu_pmu_enable_el0(events_host);
+   kvm_vcpu_pmu_disable_el0(events_guest);
+}
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 539feecda5b8..c7fa47ad2387 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -695,6 +695,7 @@ static bool access_pmcr(struct kvm_vcpu *vcpu, struct 
sys_reg_params *p,
val |= p->regval & ARMV8_PMU_PMCR_MASK;
__vcpu_sys_reg(vcpu, PMCR_EL0) = val;
kvm_pmu_handle_pmcr(vcpu, val);
+   kvm_vcpu_pmu_restore_guest(vcpu);


nit: I am not sure if we need to do this for PMCR accesses ? Unless
we have modified some changes to the events (e.g, like the two instances
below). Or am I missing something here ?

Otherwise:

Reviewed-by: Suzuki K Poulose 


Re: [PATCH v13 5/8] arm64: KVM: Enable !VHE support for :G/:H perf event modifiers

2019-04-15 Thread Suzuki K Poulose
On 09/04/2019 20:22, Andrew Murray wrote:

Enable/disable event counters as appropriate when entering and exiting
the guest to enable support for guest or host only event counting.

For both VHE and non-VHE we switch the counters between host/guest at
EL2.

The PMU may be on when we change which counters are enabled however
we avoid adding an isb as we instead rely on existing context
synchronisation events: the eret to enter the guest (__guest_enter)
and eret in kvm_call_hyp for __kvm_vcpu_run_nvhe on returning.

Signed-off-by: Andrew Murray 


Reviewed-by: Suzuki K Poulose 


Re: [PATCH v4 2/2] KVM: doc: add API documentation on the KVM_REG_ARM_WORKAROUNDS register

2019-04-15 Thread Andre Przywara
On Thu, 21 Mar 2019 13:33:18 +0100
Auger Eric  wrote:

Hi Eric,

many thanks for the feedback, turns out that this description here is
somewhat tricky. We only really care about the guest's view, but the
various host states (support configured out, disabled on the command line,
not supported by firmware, not needed on this CPU) play into this as well.
So in my understanding the information presented here does not necessarily
copy the information returned by the firmware interface, it just presents
what the guest can expect from the firmware interface.

I tried to reword the description in v5 to address any ambiguities,
meanwhile trying to answer your specific questions below.
Please have a look at v5 and tell me if that is clearer now.

> On 3/1/19 12:43 AM, Andre Przywara wrote:
> > Add documentation for the newly defined firmware registers to save and
> > restore any vulnerability migitation status.  
> s/migitation/mitigation
> > 
> > Signed-off-by: Andre Przywara 
> > ---
> >  Documentation/virtual/kvm/arm/psci.txt | 25 +
> >  1 file changed, 25 insertions(+)
> > 
> > diff --git a/Documentation/virtual/kvm/arm/psci.txt 
> > b/Documentation/virtual/kvm/arm/psci.txt
> > index aafdab887b04..1ed0f0515cd8 100644
> > --- a/Documentation/virtual/kvm/arm/psci.txt
> > +++ b/Documentation/virtual/kvm/arm/psci.txt
> > @@ -28,3 +28,28 @@ The following register is defined:
> >- Allows any PSCI version implemented by KVM and compatible with
> >  v0.2 to be set with SET_ONE_REG
> >- Affects the whole VM (even if the register view is per-vcpu)
> > +
> > +* KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1:
> > +  Holds the state of the firmware controlled workaround to mitigate
> > +  CVE-2017-5715, as described under SMCCC_ARCH_WORKAROUND_1 in [1].
> > +  Accepted values are:
> > +KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_NOT_AVAIL: Workaround not 
> > available.
> > +  The mitigation status for the guest is unknown.
> > +KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_AVAIL: The workaround HVC call is
> > +  available to the guest and required for the mitigation.  
> It is unclear to me why we don't report the actual enable status too as
> for WA2. Assuming the mitigation is not applied on the source (ie. AVAIL
> but SMCCC_ARCH_WORKAROUND_1 not called), do we need it on the destination?

In contrast to WORKAROUND_2 this one here is stateless. KVM potentially
offers the HVC call and says whether it's needed or not: What the guest
makes out of it, is up to the guest's discretion. The guest has to call
this HVC every time it wants to mitigate the issue. KVM needs to maintain
continuity on migration regarding offering this HVC call and its effect.
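
For reference, the guest side of this stateless scheme is just an SMCCC
call at every point where it wants the mitigation, roughly like this sketch
built on the kernel's arm_smccc_1_1_hvc() helper:

    #include <linux/arm-smccc.h>

    /* Guest-side sketch: request the mitigation from the hypervisor. */
    static void invoke_workaround_1(void)
    {
            arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_WORKAROUND_1, NULL);
    }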

> > +KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_UNAFFECTED: The workaround HVC call
> > +  is available to the guest, but it is not needed on this VCPU.
> > +
> > +* KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2:
> > +  Holds the state of the firmware controlled workaround to mitigate
> > +  CVE-2018-3639, as described under SMCCC_ARCH_WORKAROUND_2 in [1].  
> This does not really match the terminology used in [1]. At least it is
> not straightforward to me.

Not really sure what you mean by this, exactly?

> > +  Accepted values are:
> > +KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_AVAIL: Workaround not 
> > available.
> > +KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_UNKNOWN: Workaround state unknown. 
> >  
> both included in NOT_SUPPORTED. By the way why did we need to be more
> precise here? Seems WA1 also has the UNKNOWN state as part of
> UNSUPPORTED. Why isn't it homogeneous?

Most differences originate in the statelessness of WORKAROUND_1. Also we
have this information here in the host kernel: NOT_AVAIL is returned when
the workaround has been explicitly disabled on the command line, UNKNOWN
is used when the workaround has been configured out.
We use this to avoid migration to a "worse place": UNKNOWN is never worse
than "not available".

> > +KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_AVAIL: Workaround available, and 
> > can
> > +  be disabled by a vCPU. If 
> > KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_ENABLED is
> > +  set, it is active for this vCPU.
> > +KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_UNAFFECTED: Workaround always 
> > active  
> NOT_REQUIRED or 1?

I think we don't care. KVM always offers this HVC call. I think the trick
here is that it is not the firmware interface, but a description of it. It
just tells what the guest is able to call, we don't copy the results of
the firmware call in here.

Cheers,
Andre.


> > +  or not needed.
> > +
> > +[1] 
> > https://developer.arm.com/-/media/developer/pdf/ARM_DEN_0070A_Firmware_interfaces_for_mitigating_CVE-2017-5715.pdf
> >   
> Thanks
> 
> Eric



Re: [PATCH v4 1/2] KVM: arm/arm64: Add save/restore support for firmware workaround state

2019-04-15 Thread Andre Przywara
On Thu, 21 Mar 2019 13:54:07 +0100
Auger Eric  wrote:

Hi Eric,

sorry, forgot to hit "Send" this morning ;-)

> On 3/1/19 12:43 AM, Andre Przywara wrote:
> > KVM implements the firmware interface for mitigating cache speculation
> > vulnerabilities. Guests may use this interface to ensure mitigation is
> > active.
> > If we want to migrate such a guest to a host with a different support
> > level for those workarounds, migration might need to fail, to ensure that
> > critical guests don't lose their protection.
> > 
> > Introduce a way for userland to save and restore the workarounds state.
> > On restoring we do checks that make sure we don't downgrade our
> > mitigation level.
> > 
> > Signed-off-by: Andre Przywara 
> > ---
> >  arch/arm/include/asm/kvm_emulate.h   |  10 +++
> >  arch/arm/include/uapi/asm/kvm.h  |  10 +++
> >  arch/arm64/include/asm/kvm_emulate.h |  14 +++
> >  arch/arm64/include/uapi/asm/kvm.h|   9 ++
> >  virt/kvm/arm/psci.c  | 128 +++
> >  5 files changed, 155 insertions(+), 16 deletions(-)
> > 
> > diff --git a/arch/arm/include/asm/kvm_emulate.h 
> > b/arch/arm/include/asm/kvm_emulate.h
> > index 8927cae7c966..663a02d7e6f4 100644
> > --- a/arch/arm/include/asm/kvm_emulate.h
> > +++ b/arch/arm/include/asm/kvm_emulate.h
> > @@ -283,6 +283,16 @@ static inline unsigned long 
> > kvm_vcpu_get_mpidr_aff(struct kvm_vcpu *vcpu)
> > return vcpu_cp15(vcpu, c0_MPIDR) & MPIDR_HWID_BITMASK;
> >  }
> >  
> > +static inline bool kvm_arm_get_vcpu_workaround_2_flag(struct kvm_vcpu 
> > *vcpu)
> > +{
> > +   return false;
> > +}
> > +
> > +static inline void kvm_arm_set_vcpu_workaround_2_flag(struct kvm_vcpu 
> > *vcpu,
> > + bool flag)
> > +{
> > +}
> > +
> >  static inline void kvm_vcpu_set_be(struct kvm_vcpu *vcpu)
> >  {
> > *vcpu_cpsr(vcpu) |= PSR_E_BIT;
> > diff --git a/arch/arm/include/uapi/asm/kvm.h 
> > b/arch/arm/include/uapi/asm/kvm.h
> > index 4602464ebdfb..ba4d2afe65e3 100644
> > --- a/arch/arm/include/uapi/asm/kvm.h
> > +++ b/arch/arm/include/uapi/asm/kvm.h
> > @@ -214,6 +214,16 @@ struct kvm_vcpu_events {
> >  #define KVM_REG_ARM_FW_REG(r)  (KVM_REG_ARM | KVM_REG_SIZE_U64 
> > | \
> >  KVM_REG_ARM_FW | ((r) & 0x))
> >  #define KVM_REG_ARM_PSCI_VERSION   KVM_REG_ARM_FW_REG(0)
> > +#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1KVM_REG_ARM_FW_REG(1)
> > +#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_NOT_AVAIL  0
> > +#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_AVAIL  1
> > +#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_UNAFFECTED 2
> > +#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2KVM_REG_ARM_FW_REG(2)
> > +#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_AVAIL  0
> > +#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_UNKNOWN    1
> Would be worth adding a comment saying that values are chosen so that
> higher values mean better protection. Otherwise it looks strange
> NOT_AVAIL/AVAIL/UNAFFECTED values are not the same for both workarounds.  
> > +#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_AVAIL  2
> > +#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_UNAFFECTED 3
> > +#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_ENABLED(1U << 4)  
> 
> >  
> >  /* Device Control API: ARM VGIC */
> >  #define KVM_DEV_ARM_VGIC_GRP_ADDR  0
> > diff --git a/arch/arm64/include/asm/kvm_emulate.h 
> > b/arch/arm64/include/asm/kvm_emulate.h
> > index d3842791e1c4..c00c17c9adb6 100644
> > --- a/arch/arm64/include/asm/kvm_emulate.h
> > +++ b/arch/arm64/include/asm/kvm_emulate.h
> > @@ -348,6 +348,20 @@ static inline unsigned long 
> > kvm_vcpu_get_mpidr_aff(struct kvm_vcpu *vcpu)
> > return vcpu_read_sys_reg(vcpu, MPIDR_EL1) & MPIDR_HWID_BITMASK;
> >  }
> >  
> > +static inline bool kvm_arm_get_vcpu_workaround_2_flag(struct kvm_vcpu 
> > *vcpu)
> > +{
> > +   return vcpu->arch.workaround_flags & VCPU_WORKAROUND_2_FLAG;
> > +}
> > +
> > +static inline void kvm_arm_set_vcpu_workaround_2_flag(struct kvm_vcpu 
> > *vcpu,
> > + bool flag)
> > +{
> > +   if (flag)
> > +   vcpu->arch.workaround_flags |= VCPU_WORKAROUND_2_FLAG;
> > +   else
> > +   vcpu->arch.workaround_flags &= ~VCPU_WORKAROUND_2_FLAG;
> > +}
> > +
> >  static inline void kvm_vcpu_set_be(struct kvm_vcpu *vcpu)
> >  {
> > if (vcpu_mode_is_32bit(vcpu)) {
> > diff --git a/arch/arm64/include/uapi/asm/kvm.h 
> > b/arch/arm64/include/uapi/asm/kvm.h
> > index 97c3478ee6e7..367e96fe654e 100644
> > --- a/arch/arm64/include/uapi/asm/kvm.h
> > +++ b/arch/arm64/include/uapi/asm/kvm.h
> > @@ -225,6 +225,15 @@ struct kvm_vcpu_events {
> >  #define KVM_REG_ARM_FW_REG(r)  (KVM_REG_ARM64 | 
> > KVM_REG_SIZE_U64 | \
> >  KVM_REG_ARM_FW | ((r) & 0x))
> >  #define KVM_REG_ARM_PSCI_VERSION   KVM_REG_ARM_FW_REG(0)
> > +#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1

[PATCH v5 1/3] arm64: KVM: Propagate full Spectre v2 workaround state to KVM guests

2019-04-15 Thread Andre Przywara
Recent commits added the explicit notion of "Not affected" to the state
of the Spectre v2 (aka. BP_HARDENING) workaround, where we just had
"needed" and "unknown" before.

Export this knowledge to the rest of the kernel and enhance the existing
kvm_arm_harden_branch_predictor() to report this new state as well.
Export this new state to guests when they use KVM's firmware interface
emulation.

Signed-off-by: Andre Przywara 
---
 arch/arm/include/asm/kvm_host.h | 12 +---
 arch/arm64/include/asm/cpufeature.h |  6 ++
 arch/arm64/include/asm/kvm_host.h   | 16 ++--
 arch/arm64/kernel/cpu_errata.c  | 23 ++-
 virt/kvm/arm/psci.c | 10 +-
 5 files changed, 56 insertions(+), 11 deletions(-)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 770d73257ad9..836479e4b340 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -364,7 +364,11 @@ static inline void kvm_arch_vcpu_put_fp(struct kvm_vcpu 
*vcpu) {}
 static inline void kvm_arm_vhe_guest_enter(void) {}
 static inline void kvm_arm_vhe_guest_exit(void) {}
 
-static inline bool kvm_arm_harden_branch_predictor(void)
+#define KVM_BP_HARDEN_UNKNOWN  -1
+#define KVM_BP_HARDEN_NEEDED   0
+#define KVM_BP_HARDEN_MITIGATED    1
+
+static inline int kvm_arm_harden_branch_predictor(void)
 {
switch(read_cpuid_part()) {
 #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
@@ -372,10 +376,12 @@ static inline bool kvm_arm_harden_branch_predictor(void)
case ARM_CPU_PART_CORTEX_A12:
case ARM_CPU_PART_CORTEX_A15:
case ARM_CPU_PART_CORTEX_A17:
-   return true;
+   return KVM_BP_HARDEN_NEEDED;
 #endif
+   case ARM_CPU_PART_CORTEX_A7:
+   return KVM_BP_HARDEN_MITIGATED;
default:
-   return false;
+   return KVM_BP_HARDEN_UNKNOWN;
}
 }
 
diff --git a/arch/arm64/include/asm/cpufeature.h 
b/arch/arm64/include/asm/cpufeature.h
index 6ccdc97e5d6a..3c5b25c1bda1 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -622,6 +622,12 @@ static inline bool system_uses_irq_prio_masking(void)
   cpus_have_const_cap(ARM64_HAS_IRQ_PRIO_MASKING);
 }
 
+#define ARM64_BP_HARDEN_UNKNOWN    -1
+#define ARM64_BP_HARDEN_NEEDED 0
+#define ARM64_BP_HARDEN_MITIGATED  1
+
+int get_spectre_v2_workaround_state(void);
+
 #define ARM64_SSBD_UNKNOWN -1
 #define ARM64_SSBD_FORCE_DISABLE   0
 #define ARM64_SSBD_KERNEL  1
diff --git a/arch/arm64/include/asm/kvm_host.h 
b/arch/arm64/include/asm/kvm_host.h
index a01fe087e022..bf9a59b7d1ce 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -555,9 +555,21 @@ static inline void kvm_arm_vhe_guest_exit(void)
isb();
 }
 
-static inline bool kvm_arm_harden_branch_predictor(void)
+#define KVM_BP_HARDEN_UNKNOWN  -1
+#define KVM_BP_HARDEN_NEEDED   0
+#define KVM_BP_HARDEN_MITIGATED    1
+
+static inline int kvm_arm_harden_branch_predictor(void)
 {
-   return cpus_have_const_cap(ARM64_HARDEN_BRANCH_PREDICTOR);
+   switch (get_spectre_v2_workaround_state()) {
+   case ARM64_BP_HARDEN_NEEDED:
+   return KVM_BP_HARDEN_NEEDED;
+   case ARM64_BP_HARDEN_MITIGATED:
+   return KVM_BP_HARDEN_MITIGATED;
+   case ARM64_BP_HARDEN_UNKNOWN:
+   default:
+   return KVM_BP_HARDEN_UNKNOWN;
+   }
 }
 
 #define KVM_SSBD_UNKNOWN   -1
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index a1f3188c7be0..7fa23ab09d4e 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -555,6 +555,17 @@ cpu_enable_cache_maint_trap(const struct 
arm64_cpu_capabilities *__unused)
 static bool __hardenbp_enab = true;
 static bool __spectrev2_safe = true;
 
+int get_spectre_v2_workaround_state(void)
+{
+   if (__spectrev2_safe)
+   return ARM64_BP_HARDEN_MITIGATED;
+
+   if (!__hardenbp_enab)
+   return ARM64_BP_HARDEN_UNKNOWN;
+
+   return ARM64_BP_HARDEN_NEEDED;
+}
+
 /*
  * List of CPUs that do not need any Spectre-v2 mitigation at all.
  */
@@ -834,13 +845,15 @@ ssize_t cpu_show_spectre_v1(struct device *dev, struct 
device_attribute *attr,
 ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr,
char *buf)
 {
-   if (__spectrev2_safe)
+   switch (get_spectre_v2_workaround_state()) {
+   case ARM64_BP_HARDEN_MITIGATED:
return sprintf(buf, "Not affected\n");
-
-   if (__hardenbp_enab)
+   case ARM64_BP_HARDEN_NEEDED:
return sprintf(buf, "Mitigation: Branch predictor hardening\n");
-
-   return sprintf(buf, "Vulnerable\n");
+   case ARM64_BP_HARDEN_UNKNOWN:
+   default:
+   return sprintf(buf, 

[PATCH v5 3/3] KVM: doc: add API documentation on the KVM_REG_ARM_WORKAROUNDS register

2019-04-15 Thread Andre Przywara
Add documentation for the newly defined firmware registers to save and
restore any vulnerability mitigation status.

Signed-off-by: Andre Przywara 
---
 Documentation/virtual/kvm/arm/psci.txt | 31 ++
 1 file changed, 31 insertions(+)

diff --git a/Documentation/virtual/kvm/arm/psci.txt 
b/Documentation/virtual/kvm/arm/psci.txt
index aafdab887b04..a876c1baa56e 100644
--- a/Documentation/virtual/kvm/arm/psci.txt
+++ b/Documentation/virtual/kvm/arm/psci.txt
@@ -28,3 +28,34 @@ The following register is defined:
   - Allows any PSCI version implemented by KVM and compatible with
 v0.2 to be set with SET_ONE_REG
   - Affects the whole VM (even if the register view is per-vcpu)
+
+* KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1:
+  Holds the state of the firmware support to mitigate CVE-2017-5715, as
+  offered by KVM to the guest via a HVC call. The workaround is described
+  under SMCCC_ARCH_WORKAROUND_1 in [1].
+  Accepted values are:
+KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_NOT_AVAIL: KVM does not offer
+  firmware support for the workaround. The mitigation status for the
+  guest is unknown.
+KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_AVAIL: The workaround HVC call is
+  available to the guest and required for the mitigation.
+KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_UNAFFECTED: The workaround HVC call
+  is available to the guest, but it is not needed on this VCPU.
+
+* KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2:
+  Holds the state of the firmware support to mitigate CVE-2018-3639, as
+  offered by KVM to the guest via a HVC call. The workaround is described
+  under SMCCC_ARCH_WORKAROUND_2 in [1].
+  Accepted values are:
+KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_AVAIL: A workaround is not
+  available. KVM does not offer firmware support for the workaround.
+KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_UNKNOWN: The workaround state is
+  unknown. KVM does not offer firmware support for the workaround.
+KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_AVAIL: The workaround is available,
+  and can be disabled by a vCPU. If
+  KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_ENABLED is set, it is active for
+  this vCPU.
+KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_UNAFFECTED: The workaround is always
+  active on this vCPU or it is not needed.
+
+[1] 
https://developer.arm.com/-/media/developer/pdf/ARM_DEN_0070A_Firmware_interfaces_for_mitigating_CVE-2017-5715.pdf
-- 
2.17.1



[PATCH v5 2/3] KVM: arm/arm64: Add save/restore support for firmware workaround state

2019-04-15 Thread Andre Przywara
KVM implements the firmware interface for mitigating cache speculation
vulnerabilities. Guests may use this interface to ensure mitigation is
active.
If we want to migrate such a guest to a host with a different support
level for those workarounds, migration might need to fail, to ensure that
critical guests don't lose their protection.

Introduce a way for userland to save and restore the workarounds state.
On restoring we do checks that make sure we don't downgrade our
mitigation level.

Signed-off-by: Andre Przywara 
---
 arch/arm/include/asm/kvm_emulate.h   |  10 ++
 arch/arm/include/uapi/asm/kvm.h  |  12 +++
 arch/arm64/include/asm/kvm_emulate.h |  14 +++
 arch/arm64/include/uapi/asm/kvm.h|  10 ++
 virt/kvm/arm/psci.c  | 139 ---
 5 files changed, 170 insertions(+), 15 deletions(-)

diff --git a/arch/arm/include/asm/kvm_emulate.h 
b/arch/arm/include/asm/kvm_emulate.h
index 8927cae7c966..663a02d7e6f4 100644
--- a/arch/arm/include/asm/kvm_emulate.h
+++ b/arch/arm/include/asm/kvm_emulate.h
@@ -283,6 +283,16 @@ static inline unsigned long kvm_vcpu_get_mpidr_aff(struct 
kvm_vcpu *vcpu)
return vcpu_cp15(vcpu, c0_MPIDR) & MPIDR_HWID_BITMASK;
 }
 
+static inline bool kvm_arm_get_vcpu_workaround_2_flag(struct kvm_vcpu *vcpu)
+{
+   return false;
+}
+
+static inline void kvm_arm_set_vcpu_workaround_2_flag(struct kvm_vcpu *vcpu,
+ bool flag)
+{
+}
+
 static inline void kvm_vcpu_set_be(struct kvm_vcpu *vcpu)
 {
*vcpu_cpsr(vcpu) |= PSR_E_BIT;
diff --git a/arch/arm/include/uapi/asm/kvm.h b/arch/arm/include/uapi/asm/kvm.h
index 4602464ebdfb..dfce6fdbd03b 100644
--- a/arch/arm/include/uapi/asm/kvm.h
+++ b/arch/arm/include/uapi/asm/kvm.h
@@ -214,6 +214,18 @@ struct kvm_vcpu_events {
 #define KVM_REG_ARM_FW_REG(r)  (KVM_REG_ARM | KVM_REG_SIZE_U64 | \
 KVM_REG_ARM_FW | ((r) & 0x))
 #define KVM_REG_ARM_PSCI_VERSION   KVM_REG_ARM_FW_REG(0)
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1KVM_REG_ARM_FW_REG(1)
+   /* Higher values mean better protection. */
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_NOT_AVAIL  0
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_AVAIL  1
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_UNAFFECTED 2
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2KVM_REG_ARM_FW_REG(2)
+   /* Higher values mean better protection. */
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_AVAIL  0
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_UNKNOWN    1
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_AVAIL  2
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_UNAFFECTED 3
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_ENABLED(1U << 4)
 
 /* Device Control API: ARM VGIC */
 #define KVM_DEV_ARM_VGIC_GRP_ADDR  0
diff --git a/arch/arm64/include/asm/kvm_emulate.h 
b/arch/arm64/include/asm/kvm_emulate.h
index d3842791e1c4..c00c17c9adb6 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -348,6 +348,20 @@ static inline unsigned long kvm_vcpu_get_mpidr_aff(struct 
kvm_vcpu *vcpu)
return vcpu_read_sys_reg(vcpu, MPIDR_EL1) & MPIDR_HWID_BITMASK;
 }
 
+static inline bool kvm_arm_get_vcpu_workaround_2_flag(struct kvm_vcpu *vcpu)
+{
+   return vcpu->arch.workaround_flags & VCPU_WORKAROUND_2_FLAG;
+}
+
+static inline void kvm_arm_set_vcpu_workaround_2_flag(struct kvm_vcpu *vcpu,
+ bool flag)
+{
+   if (flag)
+   vcpu->arch.workaround_flags |= VCPU_WORKAROUND_2_FLAG;
+   else
+   vcpu->arch.workaround_flags &= ~VCPU_WORKAROUND_2_FLAG;
+}
+
 static inline void kvm_vcpu_set_be(struct kvm_vcpu *vcpu)
 {
if (vcpu_mode_is_32bit(vcpu)) {
diff --git a/arch/arm64/include/uapi/asm/kvm.h 
b/arch/arm64/include/uapi/asm/kvm.h
index 97c3478ee6e7..e75a81a7716f 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -225,6 +225,16 @@ struct kvm_vcpu_events {
 #define KVM_REG_ARM_FW_REG(r)  (KVM_REG_ARM64 | KVM_REG_SIZE_U64 | \
 KVM_REG_ARM_FW | ((r) & 0x))
 #define KVM_REG_ARM_PSCI_VERSION   KVM_REG_ARM_FW_REG(0)
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1KVM_REG_ARM_FW_REG(1)
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_NOT_AVAIL  0
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_AVAIL  1
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_UNAFFECTED 2
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2KVM_REG_ARM_FW_REG(2)
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_NOT_AVAIL  0
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_UNKNOWN    1
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_AVAIL  2
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_UNAFFECTED 3
+#define KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_2_ENABLED (1U << 4)
 
 /* Device Control API: ARM VGIC */
 #define KVM_DEV_ARM_VGIC_GRP_ADDR  0
diff --git a/virt/kvm/arm/psci.c 

[PATCH v5 0/3] KVM: arm/arm64: Add VCPU workarounds firmware register

2019-04-15 Thread Andre Przywara
Hi,

another update, mostly based upon Jeremy's sysfs vulnerabilities
series[1], which adds the new UNAFFECTED state for WORKAROUND_1.
The new patch 1/3 propagates this state to the guest, so a guest benefits
from the host kernel's knowledge about whether this workaround is needed.
This triggers some changes in patch 2/3, where we properly export this
new state to userland as well, so we can deny migration from NOT_NEEDED to
AVAILABLE, and allow the other way round.
The documentation in patch 3/3 has been reworded to be more precise, so
I dropped Steven's R-b: on this.

Cheers,
Andre

-
Workarounds for Spectre variant 2 or 4 vulnerabilities require some help
from the firmware, so KVM implements an interface to provide that for
guests. When such a guest is migrated, we want to make sure we don't
lose the protection the guest relies on.

This introduces two new firmware registers in KVM's GET/SET_ONE_REG
interface, so userland can save the level of protection implemented by
the hypervisor and used by the guest. Upon restoring these registers,
we make sure we don't downgrade and reject any values that would mean
weaker protection.
The protection level is encoded in the lower 4 bits, with smaller
values indicating weaker protection.
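
On restore, that encoding turns the downgrade check into a simple
comparison; a sketch (the helper name and mask are assumptions based on the
description above):

    #include <errno.h>

    /* Reject restoring a value that would weaken the current level. */
    #define WA_LEVEL_MASK   0xfULL

    static int check_wa_level(__u64 cur, __u64 new)
    {
            if ((new & WA_LEVEL_MASK) < (cur & WA_LEVEL_MASK))
                    return -EINVAL; /* would downgrade the protection */
            return 0;
    }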

ARM(32) is a bit of a pain (again), as the firmware register interface
is shared, but 32-bit does not implement all the workarounds.
For now I stuffed two wrappers into kvm_emulate.h, which doesn't sound
like the best solution. Happy to hear about better solutions.

This has been tested with migration between two Juno systems. Out of the
box they advertise identical workaround levels, and migration succeeds.
However when disabling the A57 cluster on one system, WORKAROUND_1 is
not needed and the host kernel propagates this. Migration now only
succeeds in one direction (from the big/LITTLE configuration to the
A53-only setup).

Please have a look and comment!

This is based upon v7 of the "add system vulnerability sysfs entries",
applied on top of 5.1-rc4. Let me know if you want to have a different
base.

Cheers,
Andre

[1] http://lists.infradead.org/pipermail/linux-arm-kernel/2019-April/645382.html

Changelog:
v4 .. v5:
- add patch to advertise ARM64_BP_HARDEN_MITIGATED state to a guest
- allow migration from KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_AVAIL to
  (new) KVM_REG_ARM_SMCCC_ARCH_WORKAROUND_1_UNAFFECTED
- reword API documentation
- return -EINVAL on querying invalid firmware register
- add some comments
- minor fixes according to Eric's review

v3 .. v4:
- clarify API documentation for WORKAROUND_1
- check for unknown bits in userland provided register values
- report proper -ENOENT when register ID is unknown

v2 .. v3:
- rebase against latest kvm-arm/next
- introduce UNAFFECTED value for WORKAROUND_1
- require exact match for WORKAROUND_1 levels

v1 .. v2:
- complete rework of WORKAROUND_2 presentation to use a linear scale,
  dropping the complicated comparison routine

Andre Przywara (3):
  arm64: KVM: Propagate full Spectre v2 workaround state to KVM guests
  KVM: arm/arm64: Add save/restore support for firmware workaround state
  KVM: doc: add API documentation on the KVM_REG_ARM_WORKAROUNDS
register

 Documentation/virtual/kvm/arm/psci.txt |  31 +
 arch/arm/include/asm/kvm_emulate.h |  10 ++
 arch/arm/include/asm/kvm_host.h|  12 +-
 arch/arm/include/uapi/asm/kvm.h|  12 ++
 arch/arm64/include/asm/cpufeature.h|   6 +
 arch/arm64/include/asm/kvm_emulate.h   |  14 +++
 arch/arm64/include/asm/kvm_host.h  |  16 ++-
 arch/arm64/include/uapi/asm/kvm.h  |  10 ++
 arch/arm64/kernel/cpu_errata.c |  23 +++-
 virt/kvm/arm/psci.c                    | 149 ++---
 10 files changed, 257 insertions(+), 26 deletions(-)

-- 
2.17.1
