Re: [PATCH v2] kvm: Replace vcpu->swait with rcuwait

2020-04-21 Thread Paolo Bonzini
On 21/04/20 20:07, Davidlohr Bueso wrote:
>> 
> 
> I should have looked closer here - I was thinking about the return
> value of rcuwait_wait_event. Yes, that signal_pending check you
> mention makes the sleep semantics change bogus, as interruptible sleep is no
> longer just about avoiding a contribution to the load average.
> 
> And yes, unfortunately adding prepare_to and finish_rcuwait() looks
> like the most reasonable approach to keeping the tracepoint
> semantics. I also considered extending rcuwait_wait_event() by
> another parameter to pass back to the caller if there was any wait at
> all, but that enlarges the call and is probably less generic.

Yes, at some point the usual prepare_to/finish APIs become simpler.

> I'll send another version keeping the current sleep and tracepoint 
> semantics.

Thanks---and sorry, I should have noticed that way earlier.

Paolo

___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


Re: [PATCH v2] kvm: Replace vcpu->swait with rcuwait

2020-04-21 Thread Davidlohr Bueso

On Tue, 21 Apr 2020, Paolo Bonzini wrote:


On 20/04/20 23:50, Davidlohr Bueso wrote:

On Mon, 20 Apr 2020, Paolo Bonzini wrote:


On 20/04/20 22:56, Davidlohr Bueso wrote:

On Mon, 20 Apr 2020, Marc Zyngier wrote:


This looks like a change in the semantics of the tracepoint. Before this
change, 'waited' would have been true if the vcpu waited at all. Here, you'd
have false if it has been interrupted by a signal, even if the vcpu has
waited for a period of time.


Hmm but sleeps are now uninterruptible as we're using TASK_IDLE.


Hold on, does that mean that you can't anymore send a signal in order to
kick a thread out of KVM_RUN?  Or am I just misunderstanding?


Considering that the return value of the interruptible wait is not
checked, I would not think this breaks KVM_RUN.


What return value?  kvm_vcpu_check_block checks signal_pending, so you
could have a case where the signal is injected but you're not woken up.

Admittedly I am not familiar with how TASK_* work under the hood, but it
does seem to be like that.
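
For reference, kvm_vcpu_check_block() looks roughly like this (paraphrased
from virt/kvm/kvm_main.c; details may differ by tree):

    static int kvm_vcpu_check_block(struct kvm_vcpu *vcpu)
    {
            if (kvm_arch_vcpu_runnable(vcpu))
                    return -EINTR;  /* upstream also raises KVM_REQ_UNHALT here */
            if (kvm_cpu_has_pending_timer(vcpu))
                    return -EINTR;
            if (signal_pending(current))
                    return -EINTR;  /* a pending signal is supposed to end the wait */
            return 0;
    }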


I should have looked closer here - I was thinking about the return value
of rcuwait_wait_event. Yes, that signal_pending check you mention makes
the sleep semantics change bogus, as interruptible sleep is no longer just
about avoiding a contribution to the load average.

And yes, unfortunately adding prepare_to and finish_rcuwait() looks like the
most reasonable approach to keeping the tracepoint semantics. I also considered
extending rcuwait_wait_event() by another parameter to pass back to the caller
if there was any wait at all, but that enlarges the call and is probably less
generic.
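
A rough sketch of what the prepare/finish variant could look like in
kvm_vcpu_block() (helper names prepare_to_rcuwait()/finish_rcuwait() are
assumed here and may differ in the final patch):

    prepare_to_rcuwait(&vcpu->wait);
    for (;;) {
            /* Stay interruptible so a pending signal still wakes us up. */
            set_current_state(TASK_INTERRUPTIBLE);

            if (kvm_vcpu_check_block(vcpu) < 0)
                    break;

            waited = true;  /* preserved for trace_kvm_vcpu_wakeup() */
            schedule();
    }
    finish_rcuwait(&vcpu->wait);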

I'll send another version keeping the current sleep and tracepoint semantics.

Thanks,
Davidlohr
___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


Re: [PATCH v2 8/8] arm64: cpufeature: Add an overview comment for the cpufeature framework

2020-04-21 Thread Suzuki K Poulose

On 04/21/2020 03:29 PM, Will Deacon wrote:

Now that Suzuki isn't within throwing distance, I thought I'd better add
a rough overview comment to cpufeature.c so that it doesn't take me days
to remember how it works next time.

Signed-off-by: Will Deacon 
---


Reviewed-by: Suzuki K Poulose 
___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


[PATCH v2 6/8] arm64: cpufeature: Relax AArch32 system checks if EL1 is 64-bit only

2020-04-21 Thread Will Deacon
If AArch32 is not supported at EL1, the AArch32 feature register fields
no longer advertise support for some system features:

  * ISAR4.SMC
  * PFR1.{Virt_frac, Sec_frac, Virtualization, Security, ProgMod}

In which case, we don't need to emit "SANITY CHECK" failures for all of
them.

Add logic to relax the strictness of individual feature register fields
at runtime and use this for the fields above if 32-bit EL1 is not
supported.

Reviewed-by: Suzuki K Poulose 
Tested-by: Sai Prakash Ranjan 
Signed-off-by: Will Deacon 
---
 arch/arm64/include/asm/cpufeature.h |  7 ++
 arch/arm64/kernel/cpufeature.c  | 33 -
 2 files changed, 39 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/cpufeature.h 
b/arch/arm64/include/asm/cpufeature.h
index afe08251ff95..f5c4672e498b 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -551,6 +551,13 @@ static inline bool id_aa64mmfr0_mixed_endian_el0(u64 mmfr0)
cpuid_feature_extract_unsigned_field(mmfr0, 
ID_AA64MMFR0_BIGENDEL0_SHIFT) == 0x1;
 }
 
+static inline bool id_aa64pfr0_32bit_el1(u64 pfr0)
+{
+   u32 val = cpuid_feature_extract_unsigned_field(pfr0, 
ID_AA64PFR0_EL1_SHIFT);
+
+   return val == ID_AA64PFR0_EL1_32BIT_64BIT;
+}
+
 static inline bool id_aa64pfr0_32bit_el0(u64 pfr0)
 {
u32 val = cpuid_feature_extract_unsigned_field(pfr0, 
ID_AA64PFR0_EL0_SHIFT);
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 6892b2440676..7e0dbe2a2f2d 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -715,6 +715,25 @@ static int check_update_ftr_reg(u32 sys_id, int cpu, u64 
val, u64 boot)
return 1;
 }
 
+static void relax_cpu_ftr_reg(u32 sys_id, int field)
+{
+   const struct arm64_ftr_bits *ftrp;
+   struct arm64_ftr_reg *regp = get_arm64_ftr_reg(sys_id);
+
+   if (WARN_ON(!regp))
+   return;
+
+   for (ftrp = regp->ftr_bits; ftrp->width; ftrp++) {
+   if (ftrp->shift == field) {
+   regp->strict_mask &= ~arm64_ftr_mask(ftrp);
+   break;
+   }
+   }
+
+   /* Bogus field? */
+   WARN_ON(!ftrp->width);
+}
+
 static int update_32bit_cpu_features(int cpu, struct cpuinfo_arm64 *info,
 struct cpuinfo_arm64 *boot)
 {
@@ -729,6 +748,19 @@ static int update_32bit_cpu_features(int cpu, struct 
cpuinfo_arm64 *info,
if (!id_aa64pfr0_32bit_el0(pfr0))
return taint;
 
+   /*
+* If we don't have AArch32 at EL1, then relax the strictness of
+* EL1-dependent register fields to avoid spurious sanity check fails.
+*/
+   if (!id_aa64pfr0_32bit_el1(pfr0)) {
+   relax_cpu_ftr_reg(SYS_ID_ISAR4_EL1, ID_ISAR4_SMC_SHIFT);
+   relax_cpu_ftr_reg(SYS_ID_PFR1_EL1, ID_PFR1_VIRT_FRAC_SHIFT);
+   relax_cpu_ftr_reg(SYS_ID_PFR1_EL1, ID_PFR1_SEC_FRAC_SHIFT);
+   relax_cpu_ftr_reg(SYS_ID_PFR1_EL1, 
ID_PFR1_VIRTUALIZATION_SHIFT);
+   relax_cpu_ftr_reg(SYS_ID_PFR1_EL1, ID_PFR1_SECURITY_SHIFT);
+   relax_cpu_ftr_reg(SYS_ID_PFR1_EL1, ID_PFR1_PROGMOD_SHIFT);
+   }
+
taint |= check_update_ftr_reg(SYS_ID_DFR0_EL1, cpu,
  info->reg_id_dfr0, boot->reg_id_dfr0);
taint |= check_update_ftr_reg(SYS_ID_ISAR0_EL1, cpu,
@@ -773,7 +805,6 @@ static int update_32bit_cpu_features(int cpu, struct 
cpuinfo_arm64 *info,
return taint;
 }
 
-
 /*
  * Update system wide CPU feature registers with the values from a
  * non-boot CPU. Also performs SANITY checks to make sure that there
-- 
2.26.1.301.g55bc3eb7cb9-goog

___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


[PATCH v2 3/8] arm64: cpufeature: Add CPU capability for AArch32 EL1 support

2020-04-21 Thread Will Deacon
Although we emit a "SANITY CHECK" warning and taint the kernel if we
detect a CPU mismatch for AArch32 support at EL1, we still online the
CPU with disastrous consequences for any running 32-bit VMs.

Introduce a capability for AArch32 support at EL1 so that late onlining
of incompatible CPUs is forbidden.

Acked-by: Marc Zyngier 
Reviewed-by: Suzuki K Poulose 
Tested-by: Sai Prakash Ranjan 
Signed-off-by: Will Deacon 
---
 arch/arm64/include/asm/cpucaps.h |  3 ++-
 arch/arm64/include/asm/sysreg.h  |  1 +
 arch/arm64/kernel/cpufeature.c   | 12 
 arch/arm64/kvm/reset.c   | 12 ++--
 4 files changed, 17 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index 8eb5a088ae65..c54c674e6c21 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -61,7 +61,8 @@
 #define ARM64_HAS_AMU_EXTN 51
 #define ARM64_HAS_ADDRESS_AUTH 52
 #define ARM64_HAS_GENERIC_AUTH 53
+#define ARM64_HAS_32BIT_EL154
 
-#define ARM64_NCAPS54
+#define ARM64_NCAPS55
 
 #endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index d7181972d28d..c4e896bf77f3 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -655,6 +655,7 @@
 #define ID_AA64PFR0_ASIMD_NI   0xf
 #define ID_AA64PFR0_ASIMD_SUPPORTED0x0
 #define ID_AA64PFR0_EL1_64BIT_ONLY 0x1
+#define ID_AA64PFR0_EL1_32BIT_64BIT0x2
 #define ID_AA64PFR0_EL0_64BIT_ONLY 0x1
 #define ID_AA64PFR0_EL0_32BIT_64BIT0x2
 
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index b143f8bc6c52..838fe5cc8d7e 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1535,6 +1535,18 @@ static const struct arm64_cpu_capabilities 
arm64_features[] = {
.field_pos = ID_AA64PFR0_EL0_SHIFT,
.min_field_value = ID_AA64PFR0_EL0_32BIT_64BIT,
},
+#ifdef CONFIG_KVM
+   {
+   .desc = "32-bit EL1 Support",
+   .capability = ARM64_HAS_32BIT_EL1,
+   .type = ARM64_CPUCAP_SYSTEM_FEATURE,
+   .matches = has_cpuid_feature,
+   .sys_reg = SYS_ID_AA64PFR0_EL1,
+   .sign = FTR_UNSIGNED,
+   .field_pos = ID_AA64PFR0_EL1_SHIFT,
+   .min_field_value = ID_AA64PFR0_EL1_32BIT_64BIT,
+   },
+#endif
{
.desc = "Kernel page table isolation (KPTI)",
.capability = ARM64_UNMAP_KERNEL_AT_EL0,
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index 30b7ea680f66..102e5c4e01a0 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -46,14 +46,6 @@ static const struct kvm_regs default_regs_reset32 = {
PSR_AA32_I_BIT | PSR_AA32_F_BIT),
 };
 
-static bool cpu_has_32bit_el1(void)
-{
-   u64 pfr0;
-
-   pfr0 = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
-   return !!(pfr0 & 0x20);
-}
-
 /**
  * kvm_arch_vm_ioctl_check_extension
  *
@@ -66,7 +58,7 @@ int kvm_arch_vm_ioctl_check_extension(struct kvm *kvm, long 
ext)
 
switch (ext) {
case KVM_CAP_ARM_EL1_32BIT:
-   r = cpu_has_32bit_el1();
+   r = cpus_have_const_cap(ARM64_HAS_32BIT_EL1);
break;
case KVM_CAP_GUEST_DEBUG_HW_BPS:
r = get_num_brps();
@@ -288,7 +280,7 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
switch (vcpu->arch.target) {
default:
if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features)) {
-   if (!cpu_has_32bit_el1())
+   if (!cpus_have_const_cap(ARM64_HAS_32BIT_EL1))
goto out;
cpu_reset = &default_regs_reset32;
} else {
-- 
2.26.1.301.g55bc3eb7cb9-goog

___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


[PATCH v2 8/8] arm64: cpufeature: Add an overview comment for the cpufeature framework

2020-04-21 Thread Will Deacon
Now that Suzuki isn't within throwing distance, I thought I'd better add
a rough overview comment to cpufeature.c so that it doesn't take me days
to remember how it works next time.

Signed-off-by: Will Deacon 
---
 arch/arm64/kernel/cpufeature.c | 50 ++
 1 file changed, 50 insertions(+)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index d63653d7c5d0..c1d44d127baa 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -3,6 +3,56 @@
  * Contains CPU feature definitions
  *
  * Copyright (C) 2015 ARM Ltd.
+ *
+ * A note for the weary kernel hacker: the code here is confusing and hard to
+ * follow! That's partly because it's solving a nasty problem, but also because
+ * there's a little bit of over-abstraction that tends to obscure what's going
+ * on behind a maze of helper functions and macros.
+ *
+ * The basic problem is that hardware folks have started gluing together CPUs
+ * with distinct architectural features; in some cases even creating SoCs where
+ * user-visible instructions are available only on a subset of the available
+ * cores. We try to address this by snapshotting the feature registers of the
+ * boot CPU and comparing these with the feature registers of each secondary
+ * CPU when bringing them up. If there is a mismatch, then we update the
+ * snapshot state to indicate the lowest-common denominator of the feature,
+ * known as the "safe" value. This snapshot state can be queried to view the
+ * "sanitised" value of a feature register.
+ *
+ * The sanitised register values are used to decide which capabilities we
+ * have in the system. These may be in the form of traditional "hwcaps"
+ * advertised to userspace or internal "cpucaps" which are used to configure
+ * things like alternative patching and static keys. While a feature mismatch
+ * may result in a TAINT_CPU_OUT_OF_SPEC kernel taint, a capability mismatch
+ * may prevent a CPU from being onlined at all.
+ *
+ * Some implementation details worth remembering:
+ *
+ * - Mismatched features are *always* sanitised to a "safe" value, which
+ *   usually indicates that the feature is not supported.
+ *
+ * - A mismatched feature marked with FTR_STRICT will cause a "SANITY CHECK"
+ *   warning when onlining an offending CPU and the kernel will be tainted
+ *   with TAINT_CPU_OUT_OF_SPEC.
+ *
+ * - Features marked as FTR_VISIBLE have their sanitised value visible to
+ *   userspace. FTR_VISIBLE features in registers that are only visible
+ *   to EL0 by trapping *must* have a corresponding HWCAP so that late
+ *   onlining of CPUs cannot lead to features disappearing at runtime.
+ *
+ * - A "feature" is typically a 4-bit register field. A "capability" is the
+ *   high-level description derived from the sanitised field value.
+ *
+ * - Read the Arm ARM (DDI 0487F.a) section D13.1.3 ("Principles of the ID
+ *   scheme for fields in ID registers") to understand when feature fields
+ *   may be signed or unsigned (FTR_SIGNED and FTR_UNSIGNED accordingly).
+ *
+ * - KVM exposes its own view of the feature registers to guest operating
+ *   systems regardless of FTR_VISIBLE. This is typically driven from the
+ *   sanitised register values to allow virtual CPUs to be migrated between
+ *   arbitrary physical CPUs, but some features not present on the host are
+ *   also advertised and emulated. Look at sys_reg_descs[] for the gory
+ *   details.
  */
 
 #define pr_fmt(fmt) "CPU features: " fmt
-- 
2.26.1.301.g55bc3eb7cb9-goog
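
To make the snapshot-and-compare flow described in the comment above
concrete, the per-field sanitisation is conceptually along these lines
(a simplified sketch, not the kernel's exact code; the real work happens
in update_cpu_features() and friends):

    static void sanitise_ftr_reg_sketch(struct arm64_ftr_reg *reg, u64 new)
    {
            const struct arm64_ftr_bits *ftrp;

            for (ftrp = reg->ftr_bits; ftrp->width; ftrp++) {
                    s64 snap = arm64_ftr_value(ftrp, reg->sys_val); /* boot snapshot */
                    s64 val  = arm64_ftr_value(ftrp, new);          /* this CPU      */

                    if (snap == val)
                            continue;

                    /* Mismatch: fold the "safe" value back into the snapshot. */
                    reg->sys_val = arm64_ftr_set_value(ftrp, reg->sys_val,
                                            arm64_ftr_safe_value(ftrp, val, snap));
                    /* ... and warn/taint if the field is FTR_STRICT. */
            }
    }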

___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


[PATCH v2 0/8] Relax sanity checking for mismatched AArch32 EL1

2020-04-21 Thread Will Deacon
Hi folks,

This is version two of the patches I previously posted here:

https://lore.kernel.org/lkml/20200414213114.2378-1-w...@kernel.org/

Changes since v1 include:

  * Typo fixes
  * Added comment to update_32bit_cpu_features() callsite regarding sanitisation
  * Extended comment in final patch to mention KVM
  * Add acks/tested-bys

Cheers,

Will

Cc: Suzuki K Poulose 
Cc: Mark Rutland 
Cc: Marc Zyngier 
Cc: Anshuman Khandual 
Cc: Catalin Marinas 
Cc: Sai Prakash Ranjan 
Cc: Doug Anderson 
Cc: kernel-t...@android.com

--->8

Sai Prakash Ranjan (1):
  arm64: cpufeature: Relax check for IESB support

Will Deacon (7):
  arm64: cpufeature: Spell out register fields for ID_ISAR4 and ID_PFR1
  arm64: cpufeature: Add CPU capability for AArch32 EL1 support
  arm64: cpufeature: Remove redundant call to id_aa64pfr0_32bit_el0()
  arm64: cpufeature: Factor out checking of AArch32 features
  arm64: cpufeature: Relax AArch32 system checks if EL1 is 64-bit only
  arm64: cpufeature: Relax checks for AArch32 support at EL[0-2]
  arm64: cpufeature: Add an overview comment for the cpufeature
framework

 arch/arm64/include/asm/cpucaps.h|   3 +-
 arch/arm64/include/asm/cpufeature.h |   7 +
 arch/arm64/include/asm/sysreg.h |  18 ++
 arch/arm64/kernel/cpufeature.c  | 247 +---
 arch/arm64/kvm/reset.c  |  12 +-
 5 files changed, 217 insertions(+), 70 deletions(-)

-- 
2.26.1.301.g55bc3eb7cb9-goog

___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


[PATCH v2 4/8] arm64: cpufeature: Remove redundant call to id_aa64pfr0_32bit_el0()

2020-04-21 Thread Will Deacon
There's no need to call id_aa64pfr0_32bit_el0() twice because the
sanitised value of ID_AA64PFR0_EL1 has already been updated for the CPU
being onlined.

Remove the redundant function call.

Reviewed-by: Suzuki K Poulose 
Tested-by: Sai Prakash Ranjan 
Signed-off-by: Will Deacon 
---
 arch/arm64/kernel/cpufeature.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 838fe5cc8d7e..7dfcdd9e75c1 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -792,9 +792,7 @@ void update_cpu_features(int cpu,
 * If we have AArch32, we care about 32-bit features for compat.
 * If the system doesn't support AArch32, don't update them.
 */
-   if (id_aa64pfr0_32bit_el0(read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1)) 
&&
-   id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0)) {
-
+   if (id_aa64pfr0_32bit_el0(read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1))) 
{
taint |= check_update_ftr_reg(SYS_ID_DFR0_EL1, cpu,
info->reg_id_dfr0, boot->reg_id_dfr0);
taint |= check_update_ftr_reg(SYS_ID_ISAR0_EL1, cpu,
-- 
2.26.1.301.g55bc3eb7cb9-goog

___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


[PATCH v2 2/8] arm64: cpufeature: Spell out register fields for ID_ISAR4 and ID_PFR1

2020-04-21 Thread Will Deacon
In preparation for runtime updates to the strictness of some AArch32
features, spell out the register fields for ID_ISAR4 and ID_PFR1 to make
things clearer to read. Note that this isn't functionally necessary, as
the feature arrays themselves are not modified dynamically and remain
'const'.

Reviewed-by: Suzuki K Poulose 
Tested-by: Sai Prakash Ranjan 
Signed-off-by: Will Deacon 
---
 arch/arm64/include/asm/sysreg.h | 17 +
 arch/arm64/kernel/cpufeature.c  | 28 ++--
 2 files changed, 43 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index c4ac0ac25a00..d7181972d28d 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -752,6 +752,15 @@
 
 #define ID_DFR0_PERFMON_8_10x4
 
+#define ID_ISAR4_SWP_FRAC_SHIFT28
+#define ID_ISAR4_PSR_M_SHIFT   24
+#define ID_ISAR4_SYNCH_PRIM_FRAC_SHIFT 20
+#define ID_ISAR4_BARRIER_SHIFT 16
+#define ID_ISAR4_SMC_SHIFT 12
+#define ID_ISAR4_WRITEBACK_SHIFT   8
+#define ID_ISAR4_WITHSHIFTS_SHIFT  4
+#define ID_ISAR4_UNPRIV_SHIFT  0
+
 #define ID_ISAR5_RDM_SHIFT 24
 #define ID_ISAR5_CRC32_SHIFT   16
 #define ID_ISAR5_SHA2_SHIFT12
@@ -785,6 +794,14 @@
 #define MVFR1_FPDNAN_SHIFT 4
 #define MVFR1_FPFTZ_SHIFT  0
 
+#define ID_PFR1_GIC_SHIFT  28
+#define ID_PFR1_VIRT_FRAC_SHIFT24
+#define ID_PFR1_SEC_FRAC_SHIFT 20
+#define ID_PFR1_GENTIMER_SHIFT 16
+#define ID_PFR1_VIRTUALIZATION_SHIFT   12
+#define ID_PFR1_MPROGMOD_SHIFT 8
+#define ID_PFR1_SECURITY_SHIFT 4
+#define ID_PFR1_PROGMOD_SHIFT  0
 
 #define ID_AA64MMFR0_TGRAN4_SHIFT  28
 #define ID_AA64MMFR0_TGRAN64_SHIFT 24
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 63df28e6a425..b143f8bc6c52 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -332,6 +332,18 @@ static const struct arm64_ftr_bits ftr_id_mmfr4[] = {
ARM64_FTR_END,
 };
 
+static const struct arm64_ftr_bits ftr_id_isar4[] = {
+   ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 
ID_ISAR4_SWP_FRAC_SHIFT, 4, 0),
+   ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 
ID_ISAR4_PSR_M_SHIFT, 4, 0),
+   ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 
ID_ISAR4_SYNCH_PRIM_FRAC_SHIFT, 4, 0),
+   ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 
ID_ISAR4_BARRIER_SHIFT, 4, 0),
+   ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 
ID_ISAR4_SMC_SHIFT, 4, 0),
+   ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 
ID_ISAR4_WRITEBACK_SHIFT, 4, 0),
+   ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 
ID_ISAR4_WITHSHIFTS_SHIFT, 4, 0),
+   ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 
ID_ISAR4_UNPRIV_SHIFT, 4, 0),
+   ARM64_FTR_END,
+};
+
 static const struct arm64_ftr_bits ftr_id_isar6[] = {
ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 
ID_ISAR6_I8MM_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 
ID_ISAR6_BF16_SHIFT, 4, 0),
@@ -351,6 +363,18 @@ static const struct arm64_ftr_bits ftr_id_pfr0[] = {
ARM64_FTR_END,
 };
 
+static const struct arm64_ftr_bits ftr_id_pfr1[] = {
+   ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 
ID_PFR1_GIC_SHIFT, 4, 0),
+   ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 
ID_PFR1_VIRT_FRAC_SHIFT, 4, 0),
+   ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 
ID_PFR1_SEC_FRAC_SHIFT, 4, 0),
+   ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 
ID_PFR1_GENTIMER_SHIFT, 4, 0),
+   ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 
ID_PFR1_VIRTUALIZATION_SHIFT, 4, 0),
+   ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 
ID_PFR1_MPROGMOD_SHIFT, 4, 0),
+   ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 
ID_PFR1_SECURITY_SHIFT, 4, 0),
+   ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 
ID_PFR1_PROGMOD_SHIFT, 4, 0),
+   ARM64_FTR_END,
+};
+
 static const struct arm64_ftr_bits ftr_id_dfr0[] = {
ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 28, 4, 0),
S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 24, 4, 0xf),   
/* PerfMon */
@@ -411,7 +435,7 @@ static const struct __ftr_reg_entry {
 
/* Op1 = 0, CRn = 0, CRm = 1 */
ARM64_FTR_REG(SYS_ID_PFR0_EL1, ftr_id_pfr0),
-   ARM64_FTR_REG(SYS_ID_PFR1_EL1, ftr_generic_32bits),
+   ARM64_FTR_REG(SYS_ID_PFR1_EL1, ftr_id_pfr1),
ARM64_FTR_REG(SYS_ID_DFR0_EL1, ftr_id_dfr0),
ARM64_FTR_REG(SYS_ID_MMFR0_EL1, ftr_id_mmfr0),
ARM64_FTR_REG(SYS_ID_MMFR1_EL1, ftr_generic_32bits),
@@ -423,7 +447,7 @@ static const struct __ftr_reg_entry {
ARM64_FTR_REG(SYS_ID_ISAR1_EL1, ftr_generic_32bits),

[PATCH v2 5/8] arm64: cpufeature: Factor out checking of AArch32 features

2020-04-21 Thread Will Deacon
update_cpu_features() is pretty large, so split out the checking of the
AArch32 features into a separate function and call it after checking the
AArch64 features.

Reviewed-by: Suzuki K Poulose 
Tested-by: Sai Prakash Ranjan 
Signed-off-by: Will Deacon 
---
 arch/arm64/kernel/cpufeature.c | 112 +++--
 1 file changed, 65 insertions(+), 47 deletions(-)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 7dfcdd9e75c1..6892b2440676 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -715,6 +715,65 @@ static int check_update_ftr_reg(u32 sys_id, int cpu, u64 
val, u64 boot)
return 1;
 }
 
+static int update_32bit_cpu_features(int cpu, struct cpuinfo_arm64 *info,
+struct cpuinfo_arm64 *boot)
+{
+   int taint = 0;
+   u64 pfr0 = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
+
+   /*
+* If we don't have AArch32 at all then skip the checks entirely
+* as the register values may be UNKNOWN and we're not going to be
+* using them for anything.
+*/
+   if (!id_aa64pfr0_32bit_el0(pfr0))
+   return taint;
+
+   taint |= check_update_ftr_reg(SYS_ID_DFR0_EL1, cpu,
+ info->reg_id_dfr0, boot->reg_id_dfr0);
+   taint |= check_update_ftr_reg(SYS_ID_ISAR0_EL1, cpu,
+ info->reg_id_isar0, boot->reg_id_isar0);
+   taint |= check_update_ftr_reg(SYS_ID_ISAR1_EL1, cpu,
+ info->reg_id_isar1, boot->reg_id_isar1);
+   taint |= check_update_ftr_reg(SYS_ID_ISAR2_EL1, cpu,
+ info->reg_id_isar2, boot->reg_id_isar2);
+   taint |= check_update_ftr_reg(SYS_ID_ISAR3_EL1, cpu,
+ info->reg_id_isar3, boot->reg_id_isar3);
+   taint |= check_update_ftr_reg(SYS_ID_ISAR4_EL1, cpu,
+ info->reg_id_isar4, boot->reg_id_isar4);
+   taint |= check_update_ftr_reg(SYS_ID_ISAR5_EL1, cpu,
+ info->reg_id_isar5, boot->reg_id_isar5);
+   taint |= check_update_ftr_reg(SYS_ID_ISAR6_EL1, cpu,
+ info->reg_id_isar6, boot->reg_id_isar6);
+
+   /*
+* Regardless of the value of the AuxReg field, the AIFSR, ADFSR, and
+* ACTLR formats could differ across CPUs and therefore would have to
+* be trapped for virtualization anyway.
+*/
+   taint |= check_update_ftr_reg(SYS_ID_MMFR0_EL1, cpu,
+ info->reg_id_mmfr0, boot->reg_id_mmfr0);
+   taint |= check_update_ftr_reg(SYS_ID_MMFR1_EL1, cpu,
+ info->reg_id_mmfr1, boot->reg_id_mmfr1);
+   taint |= check_update_ftr_reg(SYS_ID_MMFR2_EL1, cpu,
+ info->reg_id_mmfr2, boot->reg_id_mmfr2);
+   taint |= check_update_ftr_reg(SYS_ID_MMFR3_EL1, cpu,
+ info->reg_id_mmfr3, boot->reg_id_mmfr3);
+   taint |= check_update_ftr_reg(SYS_ID_PFR0_EL1, cpu,
+ info->reg_id_pfr0, boot->reg_id_pfr0);
+   taint |= check_update_ftr_reg(SYS_ID_PFR1_EL1, cpu,
+ info->reg_id_pfr1, boot->reg_id_pfr1);
+   taint |= check_update_ftr_reg(SYS_MVFR0_EL1, cpu,
+ info->reg_mvfr0, boot->reg_mvfr0);
+   taint |= check_update_ftr_reg(SYS_MVFR1_EL1, cpu,
+ info->reg_mvfr1, boot->reg_mvfr1);
+   taint |= check_update_ftr_reg(SYS_MVFR2_EL1, cpu,
+ info->reg_mvfr2, boot->reg_mvfr2);
+
+   return taint;
+}
+
+
 /*
  * Update system wide CPU feature registers with the values from a
  * non-boot CPU. Also performs SANITY checks to make sure that there
@@ -788,53 +847,6 @@ void update_cpu_features(int cpu,
taint |= check_update_ftr_reg(SYS_ID_AA64ZFR0_EL1, cpu,
  info->reg_id_aa64zfr0, 
boot->reg_id_aa64zfr0);
 
-   /*
-* If we have AArch32, we care about 32-bit features for compat.
-* If the system doesn't support AArch32, don't update them.
-*/
-   if (id_aa64pfr0_32bit_el0(read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1))) 
{
-   taint |= check_update_ftr_reg(SYS_ID_DFR0_EL1, cpu,
-   info->reg_id_dfr0, boot->reg_id_dfr0);
-   taint |= check_update_ftr_reg(SYS_ID_ISAR0_EL1, cpu,
-   info->reg_id_isar0, boot->reg_id_isar0);
-   taint |= check_update_ftr_reg(SYS_ID_ISAR1_EL1, cpu,
-   info->reg_id_isar1, boot->reg_id_isar1);
-   taint |= check_update_ftr_reg(SYS_ID_ISAR2_EL1, cpu,
-   info->reg_id_isar2, 

[PATCH v2 7/8] arm64: cpufeature: Relax checks for AArch32 support at EL[0-2]

2020-04-21 Thread Will Deacon
We don't need to be quite as strict about mismatched AArch32 support,
which is good because the friendly hardware folks have been busy
mismatching this to their hearts' content.

  * We don't care about EL2 or EL3 (there are silly comments concerning
the latter, so remove those)

  * EL1 support is gated by the ARM64_HAS_32BIT_EL1 capability and handled
gracefully when a mismatch occurs

  * EL0 support is gated by the ARM64_HAS_32BIT_EL0 capability and handled
gracefully when a mismatch occurs

Relax the AArch32 checks to FTR_NONSTRICT.

Reviewed-by: Suzuki K Poulose 
Tested-by: Sai Prakash Ranjan 
Signed-off-by: Will Deacon 
---
 arch/arm64/kernel/cpufeature.c | 10 +++---
 1 file changed, 3 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 7e0dbe2a2f2d..d63653d7c5d0 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -172,11 +172,10 @@ static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 
ID_AA64PFR0_GIC_SHIFT, 4, 0),
S_ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, 
ID_AA64PFR0_ASIMD_SHIFT, 4, ID_AA64PFR0_ASIMD_NI),
S_ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, 
ID_AA64PFR0_FP_SHIFT, 4, ID_AA64PFR0_FP_NI),
-   /* Linux doesn't care about the EL3 */
ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, 
ID_AA64PFR0_EL3_SHIFT, 4, 0),
-   ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 
ID_AA64PFR0_EL2_SHIFT, 4, 0),
-   ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 
ID_AA64PFR0_EL1_SHIFT, 4, ID_AA64PFR0_EL1_64BIT_ONLY),
-   ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 
ID_AA64PFR0_EL0_SHIFT, 4, ID_AA64PFR0_EL0_64BIT_ONLY),
+   ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, 
ID_AA64PFR0_EL2_SHIFT, 4, 0),
+   ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, 
ID_AA64PFR0_EL1_SHIFT, 4, ID_AA64PFR0_EL1_64BIT_ONLY),
+   ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, 
ID_AA64PFR0_EL0_SHIFT, 4, ID_AA64PFR0_EL0_64BIT_ONLY),
ARM64_FTR_END,
 };
 
@@ -867,9 +866,6 @@ void update_cpu_features(int cpu,
taint |= check_update_ftr_reg(SYS_ID_AA64MMFR2_EL1, cpu,
  info->reg_id_aa64mmfr2, 
boot->reg_id_aa64mmfr2);
 
-   /*
-* EL3 is not our concern.
-*/
taint |= check_update_ftr_reg(SYS_ID_AA64PFR0_EL1, cpu,
  info->reg_id_aa64pfr0, 
boot->reg_id_aa64pfr0);
taint |= check_update_ftr_reg(SYS_ID_AA64PFR1_EL1, cpu,
-- 
2.26.1.301.g55bc3eb7cb9-goog

___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


[PATCH v2 1/8] arm64: cpufeature: Relax check for IESB support

2020-04-21 Thread Will Deacon
From: Sai Prakash Ranjan 

We don't care if IESB is supported or not as we always set
SCTLR_ELx.IESB and, if it works, that's really great.

Relax the ID_AA64MMFR2.IESB cpufeature check so that we don't warn and
taint if it's mismatched.

Reviewed-by: Suzuki K Poulose 
Signed-off-by: Sai Prakash Ranjan 
[will: rewrote commit message]
Signed-off-by: Will Deacon 
---
 arch/arm64/kernel/cpufeature.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 9fac745aa7bb..63df28e6a425 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -247,7 +247,7 @@ static const struct arm64_ftr_bits ftr_id_aa64mmfr2[] = {
ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 
ID_AA64MMFR2_FWB_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, 
ID_AA64MMFR2_AT_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 
ID_AA64MMFR2_LVA_SHIFT, 4, 0),
-   ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 
ID_AA64MMFR2_IESB_SHIFT, 4, 0),
+   ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, 
ID_AA64MMFR2_IESB_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 
ID_AA64MMFR2_LSM_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 
ID_AA64MMFR2_UAO_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, 
ID_AA64MMFR2_CNP_SHIFT, 4, 0),
-- 
2.26.1.301.g55bc3eb7cb9-goog

___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


[PATCH v4 3/4] KVM: arm64: Change CONFIG_KVM to a menuconfig entry

2020-04-21 Thread Fuad Tabba
From: Will Deacon 

Changing CONFIG_KVM to be a 'menuconfig' entry in Kconfig means that we
can straightforwardly enumerate optional features, such as the virtual
PMU device, as dependent options.

Signed-off-by: Will Deacon 
Signed-off-by: Fuad Tabba 
---
 arch/arm64/kvm/Kconfig | 16 +++-
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index d2cf4f099454..f1c1f981482c 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -3,7 +3,6 @@
 # KVM configuration
 #
 
-source "virt/kvm/Kconfig"
 source "virt/lib/Kconfig"
 
 menuconfig VIRTUALIZATION
@@ -18,7 +17,7 @@ menuconfig VIRTUALIZATION
 
 if VIRTUALIZATION
 
-config KVM
+menuconfig KVM
bool "Kernel-based Virtual Machine (KVM) support"
depends on OF
# for TASKSTATS/TASK_DELAY_ACCT:
@@ -33,7 +32,6 @@ config KVM
select KVM_VFIO
select HAVE_KVM_EVENTFD
select HAVE_KVM_IRQFD
-   select KVM_ARM_PMU if HW_PERF_EVENTS
select HAVE_KVM_MSI
select HAVE_KVM_IRQCHIP
select HAVE_KVM_IRQ_ROUTING
@@ -47,13 +45,21 @@ config KVM
 
  If unsure, say N.
 
+if KVM
+
+source "virt/kvm/Kconfig"
+
 config KVM_ARM_PMU
-   bool
+   bool "Virtual Performance Monitoring Unit (PMU) support"
+   depends on HW_PERF_EVENTS
+   default y
---help---
  Adds support for a virtual Performance Monitoring Unit (PMU) in
  virtual machines.
 
 config KVM_INDIRECT_VECTORS
-   def_bool KVM && (HARDEN_BRANCH_PREDICTOR || HARDEN_EL2_VECTORS)
+   def_bool HARDEN_BRANCH_PREDICTOR || HARDEN_EL2_VECTORS
+
+endif # KVM
 
 endif # VIRTUALIZATION
-- 
2.26.1.301.g55bc3eb7cb9-goog

___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


[PATCH v4 4/4] KVM: arm64: Clean up kvm makefiles

2020-04-21 Thread Fuad Tabba
Consolidate references to the CONFIG_KVM configuration item so that they
cover entire folders rather than individual lines.

Signed-off-by: Fuad Tabba 
---
 arch/arm64/kvm/Makefile | 40 -
 arch/arm64/kvm/hyp/Makefile | 15 --
 2 files changed, 17 insertions(+), 38 deletions(-)

diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
index 419696e615b3..5354ca1b1bfb 100644
--- a/arch/arm64/kvm/Makefile
+++ b/arch/arm64/kvm/Makefile
@@ -10,30 +10,16 @@ KVM=../../../virt/kvm
 obj-$(CONFIG_KVM) += kvm.o
 obj-$(CONFIG_KVM) += hyp/
 
-kvm-$(CONFIG_KVM) += $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o
-kvm-$(CONFIG_KVM) += $(KVM)/eventfd.o $(KVM)/vfio.o $(KVM)/irqchip.o
-kvm-$(CONFIG_KVM) += arm.o mmu.o mmio.o
-kvm-$(CONFIG_KVM) += psci.o perf.o
-kvm-$(CONFIG_KVM) += hypercalls.o
-kvm-$(CONFIG_KVM) += pvtime.o
-
-kvm-$(CONFIG_KVM) += inject_fault.o regmap.o va_layout.o
-kvm-$(CONFIG_KVM) += hyp.o hyp-init.o handle_exit.o
-kvm-$(CONFIG_KVM) += guest.o debug.o reset.o sys_regs.o sys_regs_generic_v8.o
-kvm-$(CONFIG_KVM) += vgic-sys-reg-v3.o fpsimd.o pmu.o
-kvm-$(CONFIG_KVM) += aarch32.o
-kvm-$(CONFIG_KVM) += arch_timer.o
-kvm-$(CONFIG_KVM_ARM_PMU)  += pmu-emul.o
-
-kvm-$(CONFIG_KVM) += vgic/vgic.o
-kvm-$(CONFIG_KVM) += vgic/vgic-init.o
-kvm-$(CONFIG_KVM) += vgic/vgic-irqfd.o
-kvm-$(CONFIG_KVM) += vgic/vgic-v2.o
-kvm-$(CONFIG_KVM) += vgic/vgic-v3.o
-kvm-$(CONFIG_KVM) += vgic/vgic-v4.o
-kvm-$(CONFIG_KVM) += vgic/vgic-mmio.o
-kvm-$(CONFIG_KVM) += vgic/vgic-mmio-v2.o
-kvm-$(CONFIG_KVM) += vgic/vgic-mmio-v3.o
-kvm-$(CONFIG_KVM) += vgic/vgic-kvm-device.o
-kvm-$(CONFIG_KVM) += vgic/vgic-its.o
-kvm-$(CONFIG_KVM) += vgic/vgic-debug.o
+kvm-y := $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o $(KVM)/eventfd.o \
+$(KVM)/vfio.o $(KVM)/irqchip.o \
+arm.o mmu.o mmio.o psci.o perf.o hypercalls.o pvtime.o \
+inject_fault.o regmap.o va_layout.o hyp.o hyp-init.o handle_exit.o \
+guest.o debug.o reset.o sys_regs.o sys_regs_generic_v8.o \
+vgic-sys-reg-v3.o fpsimd.o pmu.o pmu-emul.o \
+aarch32.o arch_timer.o \
+vgic/vgic.o vgic/vgic-init.o \
+vgic/vgic-irqfd.o vgic/vgic-v2.o \
+vgic/vgic-v3.o vgic/vgic-v4.o \
+vgic/vgic-mmio.o vgic/vgic-mmio-v2.o \
+vgic/vgic-mmio-v3.o vgic/vgic-kvm-device.o \
+vgic/vgic-its.o vgic/vgic-debug.o
diff --git a/arch/arm64/kvm/hyp/Makefile b/arch/arm64/kvm/hyp/Makefile
index 8229e47ba870..529aecbd0231 100644
--- a/arch/arm64/kvm/hyp/Makefile
+++ b/arch/arm64/kvm/hyp/Makefile
@@ -6,17 +6,10 @@
 ccflags-y += -fno-stack-protector -DDISABLE_BRANCH_PROFILING \
$(DISABLE_STACKLEAK_PLUGIN)
 
-obj-$(CONFIG_KVM) += vgic-v3-sr.o
-obj-$(CONFIG_KVM) += timer-sr.o
-obj-$(CONFIG_KVM) += aarch32.o
-obj-$(CONFIG_KVM) += vgic-v2-cpuif-proxy.o
-obj-$(CONFIG_KVM) += sysreg-sr.o
-obj-$(CONFIG_KVM) += debug-sr.o
-obj-$(CONFIG_KVM) += entry.o
-obj-$(CONFIG_KVM) += switch.o
-obj-$(CONFIG_KVM) += fpsimd.o
-obj-$(CONFIG_KVM) += tlb.o
-obj-$(CONFIG_KVM) += hyp-entry.o
+obj-$(CONFIG_KVM) += hyp.o
+
+hyp-y := vgic-v3-sr.o timer-sr.o aarch32.o vgic-v2-cpuif-proxy.o sysreg-sr.o \
+debug-sr.o entry.o switch.o fpsimd.o tlb.o hyp-entry.o
 
 # KVM code is run at a different exception code with a different map, so
 # compiler instrumentation that inserts callbacks or checks into the code may
-- 
2.26.1.301.g55bc3eb7cb9-goog

___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


[PATCH v4 0/4] KVM: arm64: Tidy up arch Kconfig and Makefiles

2020-04-21 Thread Fuad Tabba
Hi,

This small patch series tidies up the arm64 KVM build system by
rejigging config options, removing some redundant help text, and
consolidating some of the Makefile rules.

The changes are cosmetic, but it seemed worthwhile to send this out
for consideration. This series is a refresh on top of 5.7-rc1.
It depends on Marc's kvm-arm64/welcome-home branch [1] plus the fix
from Will [2].

Changes since V3 [3]:
  * Rebased on top of Will's fix [2].
  * Added S-o-B to patches written by others.

Cheers,
/fuad

[1]
https://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms.git/log/?h=kvm-arm64/welcome-home

[2]
https://lists.cs.columbia.edu/pipermail/kvmarm/2020-April/040336.html

[3]
https://lists.cs.columbia.edu/pipermail/kvmarm/2020-April/040288.html


Fuad Tabba (1):
  KVM: arm64: Clean up kvm makefiles

Will Deacon (3):
  KVM: arm64: Kill off CONFIG_KVM_ARM_HOST
  KVM: arm64: Update help text
  KVM: arm64: Change CONFIG_KVM to a menuconfig entry

 arch/arm64/kernel/asm-offsets.c |  2 +-
 arch/arm64/kernel/cpu_errata.c  |  2 +-
 arch/arm64/kernel/smp.c |  2 +-
 arch/arm64/kvm/Kconfig  | 22 -
 arch/arm64/kvm/Makefile | 44 +++--
 arch/arm64/kvm/hyp/Makefile | 15 +++
 6 files changed, 32 insertions(+), 55 deletions(-)

-- 
2.26.1.301.g55bc3eb7cb9-goog

___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


[PATCH v4 1/4] KVM: arm64: Kill off CONFIG_KVM_ARM_HOST

2020-04-21 Thread Fuad Tabba
From: Will Deacon 

CONFIG_KVM_ARM_HOST is just a proxy for CONFIG_KVM, so remove it in favour
of the latter.

Signed-off-by: Will Deacon 
Signed-off-by: Fuad Tabba 
---
 arch/arm64/kernel/asm-offsets.c |  2 +-
 arch/arm64/kernel/cpu_errata.c  |  2 +-
 arch/arm64/kernel/smp.c |  2 +-
 arch/arm64/kvm/Kconfig  |  6 
 arch/arm64/kvm/Makefile | 52 -
 arch/arm64/kvm/hyp/Makefile | 22 +++---
 6 files changed, 40 insertions(+), 46 deletions(-)

diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 9981a0a5a87f..a27e0cd731e9 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -96,7 +96,7 @@ int main(void)
   DEFINE(CPU_BOOT_PTRAUTH_KEY, offsetof(struct secondary_data, ptrauth_key));
 #endif
   BLANK();
-#ifdef CONFIG_KVM_ARM_HOST
+#ifdef CONFIG_KVM
   DEFINE(VCPU_CONTEXT, offsetof(struct kvm_vcpu, arch.ctxt));
   DEFINE(VCPU_FAULT_DISR,  offsetof(struct kvm_vcpu, arch.fault.disr_el1));
   DEFINE(VCPU_WORKAROUND_FLAGS,offsetof(struct kvm_vcpu, 
arch.workaround_flags));
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index df56d2295d16..a102321fc8a2 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -234,7 +234,7 @@ static int detect_harden_bp_fw(void)
smccc_end = NULL;
break;
 
-#if IS_ENABLED(CONFIG_KVM_ARM_HOST)
+#if IS_ENABLED(CONFIG_KVM)
case SMCCC_CONDUIT_SMC:
cb = call_smc_arch_workaround_1;
smccc_start = __smccc_workaround_1_smc;
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index 061f60fe452f..0a3045d9f33f 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -430,7 +430,7 @@ static void __init hyp_mode_check(void)
   "CPU: CPUs started in inconsistent modes");
else
pr_info("CPU: All CPU(s) started at EL1\n");
-   if (IS_ENABLED(CONFIG_KVM_ARM_HOST))
+   if (IS_ENABLED(CONFIG_KVM))
kvm_compute_layout();
 }
 
diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index 449386d76441..ce724e526689 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -28,7 +28,6 @@ config KVM
select HAVE_KVM_CPU_RELAX_INTERCEPT
select HAVE_KVM_ARCH_TLB_FLUSH_ALL
select KVM_MMIO
-   select KVM_ARM_HOST
select KVM_GENERIC_DIRTYLOG_READ_PROTECT
select SRCU
select KVM_VFIO
@@ -50,11 +49,6 @@ config KVM
 
  If unsure, say N.
 
-config KVM_ARM_HOST
-   bool
-   ---help---
- Provides host support for ARM processors.
-
 config KVM_ARM_PMU
bool
---help---
diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
index 7a3768538343..419696e615b3 100644
--- a/arch/arm64/kvm/Makefile
+++ b/arch/arm64/kvm/Makefile
@@ -7,33 +7,33 @@ ccflags-y += -I $(srctree)/$(src)
 
 KVM=../../../virt/kvm
 
-obj-$(CONFIG_KVM_ARM_HOST) += kvm.o
-obj-$(CONFIG_KVM_ARM_HOST) += hyp/
+obj-$(CONFIG_KVM) += kvm.o
+obj-$(CONFIG_KVM) += hyp/
 
-kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o
-kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/eventfd.o $(KVM)/vfio.o $(KVM)/irqchip.o
-kvm-$(CONFIG_KVM_ARM_HOST) += arm.o mmu.o mmio.o
-kvm-$(CONFIG_KVM_ARM_HOST) += psci.o perf.o
-kvm-$(CONFIG_KVM_ARM_HOST) += hypercalls.o
-kvm-$(CONFIG_KVM_ARM_HOST) += pvtime.o
+kvm-$(CONFIG_KVM) += $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o
+kvm-$(CONFIG_KVM) += $(KVM)/eventfd.o $(KVM)/vfio.o $(KVM)/irqchip.o
+kvm-$(CONFIG_KVM) += arm.o mmu.o mmio.o
+kvm-$(CONFIG_KVM) += psci.o perf.o
+kvm-$(CONFIG_KVM) += hypercalls.o
+kvm-$(CONFIG_KVM) += pvtime.o
 
-kvm-$(CONFIG_KVM_ARM_HOST) += inject_fault.o regmap.o va_layout.o
-kvm-$(CONFIG_KVM_ARM_HOST) += hyp.o hyp-init.o handle_exit.o
-kvm-$(CONFIG_KVM_ARM_HOST) += guest.o debug.o reset.o sys_regs.o 
sys_regs_generic_v8.o
-kvm-$(CONFIG_KVM_ARM_HOST) += vgic-sys-reg-v3.o fpsimd.o pmu.o
-kvm-$(CONFIG_KVM_ARM_HOST) += aarch32.o
-kvm-$(CONFIG_KVM_ARM_HOST) += arch_timer.o
+kvm-$(CONFIG_KVM) += inject_fault.o regmap.o va_layout.o
+kvm-$(CONFIG_KVM) += hyp.o hyp-init.o handle_exit.o
+kvm-$(CONFIG_KVM) += guest.o debug.o reset.o sys_regs.o sys_regs_generic_v8.o
+kvm-$(CONFIG_KVM) += vgic-sys-reg-v3.o fpsimd.o pmu.o
+kvm-$(CONFIG_KVM) += aarch32.o
+kvm-$(CONFIG_KVM) += arch_timer.o
 kvm-$(CONFIG_KVM_ARM_PMU)  += pmu-emul.o
 
-kvm-$(CONFIG_KVM_ARM_HOST) += vgic/vgic.o
-kvm-$(CONFIG_KVM_ARM_HOST) += vgic/vgic-init.o
-kvm-$(CONFIG_KVM_ARM_HOST) += vgic/vgic-irqfd.o
-kvm-$(CONFIG_KVM_ARM_HOST) += vgic/vgic-v2.o
-kvm-$(CONFIG_KVM_ARM_HOST) += vgic/vgic-v3.o
-kvm-$(CONFIG_KVM_ARM_HOST) += vgic/vgic-v4.o
-kvm-$(CONFIG_KVM_ARM_HOST) += vgic/vgic-mmio.o
-kvm-$(CONFIG_KVM_ARM_HOST) += vgic/vgic-mmio-v2.o
-kvm-$(CONFIG_KVM_ARM_HOST) += vgic/vgic-mmio-v3.o
-kvm-$(CONFIG_KVM_ARM_HOST) += vgic/vgic-kvm-device.o
-kvm-$(CONFIG_KVM_ARM_HOST) += 

[PATCH v4 2/4] KVM: arm64: Update help text

2020-04-21 Thread Fuad Tabba
From: Will Deacon 

arm64 KVM has supported 16k pages since commit 02e0b7600f83
("arm64: kvm: Add support for 16K pages"), so update the Kconfig help
text accordingly.

Signed-off-by: Will Deacon 
Signed-off-by: Fuad Tabba 
---
 arch/arm64/kvm/Kconfig | 2 --
 1 file changed, 2 deletions(-)

diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index ce724e526689..d2cf4f099454 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -44,8 +44,6 @@ config KVM
select TASK_DELAY_ACCT
---help---
  Support hosting virtualized guest machines.
- We don't support KVM with 16K page tables yet, due to the multiple
- levels of fake page tables.
 
  If unsure, say N.
 
-- 
2.26.1.301.g55bc3eb7cb9-goog

___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


RE: [PATCH v2 00/94] KVM: arm64: ARMv8.3/8.4 Nested Virtualization support

2020-04-21 Thread Zengtao (B)
> -Original Message-
> From: kvm-ow...@vger.kernel.org [mailto:kvm-ow...@vger.kernel.org]
> On Behalf Of Marc Zyngier
> Sent: Monday, April 20, 2020 10:03 PM
> To: Zengtao (B)
> Cc: George Cherian; dave.mar...@arm.com; alexandru.eli...@arm.com;
> andre.przyw...@arm.com; christoffer.d...@arm.com;
> james.mo...@arm.com; jint...@cs.columbia.edu;
> julien.thierry.k...@gmail.com; k...@vger.kernel.org;
> kvmarm@lists.cs.columbia.edu; linux-arm-ker...@lists.infradead.org;
> suzuki.poul...@arm.com; Anil Kumar Reddy H; Ganapatrao Kulkarni
> Subject: Re: [PATCH v2 00/94] KVM: arm64: ARMv8.3/8.4 Nested
> Virtualization support
> 
> On 2020-04-18 03:49, Zengtao (B) wrote:
> > -Original Message-
> >> From: Marc Zyngier [mailto:m...@kernel.org]
> >> Sent: Friday, April 17, 2020 11:06 PM
> >> To: Zengtao (B)
> >> Cc: George Cherian; dave.mar...@arm.com;
> alexandru.eli...@arm.com;
> >> andre.przyw...@arm.com; christoffer.d...@arm.com;
> >> james.mo...@arm.com; jint...@cs.columbia.edu;
> >> julien.thierry.k...@gmail.com; k...@vger.kernel.org;
> >> kvmarm@lists.cs.columbia.edu; linux-arm-ker...@lists.infradead.org;
> >> suzuki.poul...@arm.com; Anil Kumar Reddy H; Ganapatrao Kulkarni
> >> Subject: Re: [PATCH v2 00/94] KVM: arm64: ARMv8.3/8.4 Nested
> >> Virtualization support
> >>
> >> On Thu, 16 Apr 2020 19:22:21 +0100
> >> Marc Zyngier  wrote:
> >>
> >> > Hi Zengtao,
> >> >
> >> > On 2020-04-16 02:38, Zengtao (B) wrote:
> >> > > Hi Marc:
> >> > >
> >> > > Got it.
> >> > > Really a bit patch set :)
> >> >
> >> > Well, yeah... ;-)
> >> >
> >> > >
> >> > > BTW, I have done a basic kvm unit test
> >> > > git://git.kernel.org/pub/scm/virt/kvm/kvm-unit-tests.git
> >> > > And I find that after apply the patch KVM: arm64: VNCR-ize
> ELR_EL1,
> >> > > The psci test failed for some reason, I can't understand why, this
> >> > > is only the test result.(find the patch by git bisect + kvm test)
> >> >
> >> > That it is that mechanical, we should be able to quickly nail that
> one.
> >> >
> >> > > My platform: Hisilicon D06 board.
> >> > > Linux kernel: Linux 5.6-rc6 + nv patches(some rebases)
> >> > > Could you help to take a look?
> >> >
> >> > I'll have a look tomorrow. I'm in the middle of refactoring the series
> >> > for 5.7, and things have changed quite a bit. Hopefully this isn't a
> VHE
> >> > vs non-VHE issue.
> >>
> >> So I've repeatedly tried with the current state of the NV patches[1],
> >> on both an ARMv8.0 system (Seattle) and an ARMv8.2 pile of putrid
> junk
> >> (vim3l). PSCI is pretty happy, although I can only test with at most 8
> >> vcpus (GICv2 gets in the way).
> >>
> >> Can you please:
> >>
> >> - post the detailed error by running the PSCI unit test on its own
> > I tried to trace the error, and I found that in the kernel function
> > kvm_mpidr_to_vcpu, occasionally the mpidr reads back as zero and we can't
> > get the expected vcpu, and the psci test fails because of this.
> 
> Can you post the exact error message from the unit test?
> 
Some debug code added as follows (virt/kvm/arm/arm.c):

unsigned long saved_mpidr[256];

static void dump_saved_mpidr(struct kvm *kvm, unsigned long mpidr)
{
struct kvm_vcpu *vcpu;
int i;

printk("target mpidr:%lx\n", mpidr);
kvm_for_each_vcpu(i, vcpu, kvm) {
printk("saved_mpidr:%lx latest mpidr:%lx\n", saved_mpidr[i], 
kvm_vcpu_get_mpidr_aff(vcpu));
}
}

struct kvm_vcpu *kvm_mpidr_to_vcpu(struct kvm *kvm, unsigned long mpidr)
{
struct kvm_vcpu *vcpu;
int i;

mpidr &= MPIDR_HWID_BITMASK;
kvm_for_each_vcpu(i, vcpu, kvm) {
saved_mpidr[i] = kvm_vcpu_get_mpidr_aff(vcpu);
if (mpidr == saved_mpidr[i])
return vcpu;
}

dump_saved_mpidr(kvm, mpidr);

return NULL;
}

error log:
[root@localhost test]# ./psci
BUILD_HEAD=b16df9ee
timeout -k 1s --foreground 90s /sbin/qemu-system-aarch64 -nodefaults 
-machine virt,gic-version=host,accel=kvm -cpu host -device virtio-serial-device
-device virtconsole,chardev=ctd -chardev testdev,id=ctd -device pci-testdev
-display none -serial stdio -kernel /tmp/tmp.QDQH5cBotg -smp 128
 # -initrd /tmp/tmp.uW5pheTN1j
INFO: psci: PSCI version 1.0
PASS: psci: invalid-function
PASS: psci: affinity-info-on
PASS: psci: affinity-info-off
[  218.891944] target mpidr:1
[  218.894645] saved_mpidr:0 latest mpidr:0
[  218.898569] saved_mpidr:0 latest mpidr:1
[  218.902481] saved_mpidr:2 latest mpidr:2
[  218.906393] saved_mpidr:3 latest mpidr:3
[  218.910308] saved_mpidr:4 latest mpidr:4
[  218.914223] saved_mpidr:5 latest mpidr:5
[  218.918138] saved_mpidr:6 latest mpidr:6
[  218.922051] saved_mpidr:7 latest mpidr:7
[  218.925965] saved_mpidr:8 latest mpidr:8
[  218.929878] saved_mpidr:9 latest mpidr:9
[  218.933788] saved_mpidr:a latest mpidr:a
[  218.937703] saved_mpidr:b latest mpidr:b
[  218.941618] saved_mpidr:c latest mpidr:c
[  218.945533] saved_mpidr:d latest mpidr:d
[  218.949445] 

Re: [PATCH v2] kvm: Replace vcpu->swait with rcuwait

2020-04-21 Thread Davidlohr Bueso

On Mon, 20 Apr 2020, Marc Zyngier wrote:


This looks like a change in the semantics of the tracepoint. Before this
change, 'waited' would have been true if the vcpu waited at all. Here, you'd
have false if it has been interrupted by a signal, even if the vcpu has
waited for a period of time.


Hmm but sleeps are now uninterruptible as we're using TASK_IDLE.

Thanks,
Davidlohr
___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


Re: [PATCH v2 00/33] Documentation fixes for Kernel 5.8

2020-04-21 Thread Jonathan Corbet
On Tue, 14 Apr 2020 18:48:26 +0200
Mauro Carvalho Chehab  wrote:

> Patches 1 to 5 contain changes to the documentation toolset:
> 
> - The first 3 patches greatly reduce the number of reported
>   kernel-doc issues by making the tool smarter.
> 
> - Patches 4 and 5 are meant to partially address the PDF
>   build, which now requires Sphinx version 2.4 or newer.
> 
> The remaining patches fix broken references detected by
> this tool:
> 
> ./scripts/documentation-file-ref-check
> 
> and address other random errors due to tags being mis-interpreted
> or mis-used.
> 
> They are independent of each other, but some may depend on
> the kernel-doc improvements.
> 
> PS.: Due to the large number of Cc:s, I opted to keep a smaller
> set of Cc:s on this first e-mail (only addresses with an "L:" tag in the
> MAINTAINERS file).

OK, I've applied this set, minus #17 which was applied elsewhere.

Thanks,

jon
___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


Re: [PATCH v2] kvm: Replace vcpu->swait with rcuwait

2020-04-21 Thread Davidlohr Bueso

On Mon, 20 Apr 2020, Paolo Bonzini wrote:


On 20/04/20 22:56, Davidlohr Bueso wrote:

On Mon, 20 Apr 2020, Marc Zyngier wrote:


This looks like a change in the semantics of the tracepoint. Before this
change, 'waited' would have been true if the vcpu waited at all. Here, you'd
have false if it has been interrupted by a signal, even if the vcpu has
waited for a period of time.


Hmm but sleeps are now uninterruptible as we're using TASK_IDLE.


Hold on, does that mean that you can't anymore send a signal in order to
kick a thread out of KVM_RUN?  Or am I just misunderstanding?


Considering that the return value of the interruptible wait is not
checked, I would not think this breaks KVM_RUN.

Thanks,
Davidlohr
___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


Re: [PATCH kvmtool v3] Add emulation for CFI compatible flash memory

2020-04-21 Thread André Przywara
On 07/04/2020 16:15, Alexandru Elisei wrote:
> Hi,
> 
> I've tested this patch by running badblocks and fio on a flash device inside a
> guest, everything worked as expected.
> 
> I've also looked at the flowcharts for device operation from Intel Application
> Note 646, pages 12-21, and they seem implemented correctly.
> 
> A few minor issues below.

^^^

Slight understatement ;-)

> 
> On 2/21/20 4:55 PM, Andre Przywara wrote:
>> From: Raphael Gault 
>>
>> The EDK II UEFI firmware implementation requires some storage for the EFI
>> variables, which is typically some flash storage.
>> Since this is already supported on the EDK II side, we add a CFI flash
>> emulation to kvmtool.
>> This is backed by a file, specified via the --flash or -F command line
>> option. Any flash writes done by the guest will immediately be reflected
>> into this file (kvmtool mmap's the file).
>> The flash will be limited to the nearest power-of-2 size, so only the
>> first 2 MB of a 3 MB file will be used.
>>
>> This implements a CFI flash using the "Intel/Sharp extended command
>> set", as specified in:
>> - JEDEC JESD68.01
>> - JEDEC JEP137B
>> - Intel Application Note 646
>> Some gaps in those specs have been filled by looking at real devices and
>> other implementations (QEMU, Linux kernel driver).
>>
>> At the moment this relies on DT to advertise the base address of the
>> flash memory (mapped into the MMIO address space) and is only enabled
>> for ARM/ARM64. The emulation itself is architecture agnostic, though.
>>
>> This is one missing piece toward a working UEFI boot with kvmtool on
>> ARM guests, the other is to provide writable PCI BARs, which is WIP.
>>
>> Signed-off-by: Raphael Gault 
>> [Andre: rewriting and fixing]
>> Signed-off-by: Andre Przywra 
>> ---
>> Hi,
>>
>> an update fixing Alexandru's review comments (many thanks for those!)
>> The biggest change code-wise is the split of the MMIO handler into three
>> different functions. Another significant change is the rounding *down* of
>> the present flash file size to the nearest power-of-two, to match flash
>> hardware chips and Linux' expectations.
>>
>> Cheers,
>> Andre
>>
>> Changelog v2 .. v3:
>> - Breaking MMIO handling into three separate functions.
>> - Assign the flash base address in the memory map, but stay at 32 MB for now.
>>   The MMIO area has been moved up to 48 MB, to never overlap with the
>>   flash.
>> - Impose a limit of 16 MB for the flash size, mostly to fit into the
>>   (for now) fixed memory map.
>> - Trim flash size down to nearest power-of-2, to match hardware.
>> - Announce forced flash size trimming.
>> - Rework the CFI query table slightly, to add the addresses as array
>>   indices.
>> - Fix error handling when creating the flash device.
>> - Fix pow2_size implementation for 0 and 1 as input values.
>> - Fix write buffer size handling.
>> - Improve some comments.
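
A rounding helper along the lines described above (e.g. treating a 3 MB
backing file as a 2 MB flash) could look like this; a sketch only, not
kvmtool's actual pow2_size() implementation:

    /* Round a flash file size down to the nearest power of two. */
    static u64 flash_usable_size(u64 file_size)
    {
            if (file_size < 2)
                    return file_size;

            /* Keep only the highest set bit: 3 MB -> 2 MB, 5 MB -> 4 MB, ... */
            return 1ULL << (63 - __builtin_clzll(file_size));
    }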
>>
>> Changelog v1 .. v2:
>> - Add locking for MMIO handling.
>> - Fold flash read into handler.
>> - Move pow2_size() into generic header.
>> - Spell out flash base address.
>>
>>  Makefile  |   6 +
>>  arm/include/arm-common/kvm-arch.h |   8 +-
>>  builtin-run.c |   2 +
>>  hw/cfi_flash.c| 576 ++
>>  include/kvm/kvm-config.h  |   1 +
>>  include/kvm/util.h|   8 +
>>  6 files changed, 599 insertions(+), 2 deletions(-)
>>  create mode 100644 hw/cfi_flash.c
>>
>> diff --git a/Makefile b/Makefile
>> index 3862112c..7ed6fb5e 100644
>> --- a/Makefile
>> +++ b/Makefile
>> @@ -170,6 +170,7 @@ ifeq ($(ARCH), arm)
>>  CFLAGS  += -march=armv7-a
>>  
>>  ARCH_WANT_LIBFDT := y
>> +ARCH_HAS_FLASH_MEM := y
>>  endif
>>  
>>  # ARM64
>> @@ -182,6 +183,7 @@ ifeq ($(ARCH), arm64)
>>  ARCH_INCLUDE+= -Iarm/aarch64/include
>>  
>>  ARCH_WANT_LIBFDT := y
>> +ARCH_HAS_FLASH_MEM := y
>>  endif
>>  
>>  ifeq ($(ARCH),mips)
>> @@ -261,6 +263,10 @@ ifeq (y,$(ARCH_HAS_FRAMEBUFFER))
>>  endif
>>  endif
>>  
>> +ifeq (y,$(ARCH_HAS_FLASH_MEM))
>> +OBJS+= hw/cfi_flash.o
>> +endif
>> +
>>  ifeq ($(call try-build,$(SOURCE_ZLIB),$(CFLAGS),$(LDFLAGS) -lz),y)
>>  CFLAGS_DYNOPT   += -DCONFIG_HAS_ZLIB
>>  LIBS_DYNOPT += -lz
>> diff --git a/arm/include/arm-common/kvm-arch.h 
>> b/arm/include/arm-common/kvm-arch.h
>> index b9d486d5..d84e50cd 100644
>> --- a/arm/include/arm-common/kvm-arch.h
>> +++ b/arm/include/arm-common/kvm-arch.h
>> @@ -8,7 +8,8 @@
>>  #include "arm-common/gic.h"
>>  
>>  #define ARM_IOPORT_AREA _AC(0x, UL)
>> -#define ARM_MMIO_AREA   _AC(0x0001, UL)
>> +#define ARM_FLASH_AREA  _AC(0x0200, UL)
>> +#define ARM_MMIO_AREA   _AC(0x0300, UL)
>>  #define ARM_AXI_AREA_AC(0x4000, UL)
>>  #define ARM_MEMORY_AREA _AC(0x8000, UL)
>>  
>> @@ -21,7 +22,10 @@
>>  #define 

Re: [RFC PATCH v11 5/9] psci: Add hypercall service for ptp_kvm.

2020-04-21 Thread Mark Rutland
On Tue, Apr 21, 2020 at 11:23:00AM +0800, Jianyong Wu wrote:
> The ptp_kvm module will get this service through an SMCCC call.
> The service offers the host's real time and counter cycle to the guest,
> and also lets the caller determine whether the virtual or the physical
> counter cycle is returned.
> 
> Signed-off-by: Jianyong Wu 
> ---
>  include/linux/arm-smccc.h | 21 +++
>  virt/kvm/arm/hypercalls.c | 44 ++-
>  2 files changed, 64 insertions(+), 1 deletion(-)
> 
> diff --git a/include/linux/arm-smccc.h b/include/linux/arm-smccc.h
> index 59494df0f55b..747b7595d0c6 100644
> --- a/include/linux/arm-smccc.h
> +++ b/include/linux/arm-smccc.h
> @@ -77,6 +77,27 @@
>  ARM_SMCCC_SMC_32,\
>  0, 0x7fff)
>  
> +/* PTP KVM call requests clock time from guest OS to host */
> +#define ARM_SMCCC_HYP_KVM_PTP_FUNC_ID\
> + ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \
> +ARM_SMCCC_SMC_32,\
> +ARM_SMCCC_OWNER_STANDARD_HYP,\
> +0)
> +
> +/* request for virtual counter from ptp_kvm guest */
> +#define ARM_SMCCC_HYP_KVM_PTP_VIRT   \
> + ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \
> +ARM_SMCCC_SMC_32,\
> +ARM_SMCCC_OWNER_STANDARD_HYP,\
> +1)
> +
> +/* request for physical counter from ptp_kvm guest */
> +#define ARM_SMCCC_HYP_KVM_PTP_PHY\
> + ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \
> +ARM_SMCCC_SMC_32,\
> +ARM_SMCCC_OWNER_STANDARD_HYP,\
> +2)

ARM_SMCCC_OWNER_STANDARD_HYP is for standard calls as defined in SMCCC
and companion documents, so we should refer to the specific
documentation here. Where are these calls defined?

If these calls are Linux-specific then ARM_SMCCC_OWNER_STANDARD_HYP
isn't appropriate to use, as they are vendor-specific hypervisor service
calls.

It looks like we don't currently have an ARM_SMCCC_OWNER_HYP for that
(which IIUC would be 6), but we can add one as necessary. I think that
Will might have added that as part of his SMCCC probing bits.
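
As a rough sketch (assumed names and numbers, neither from this thread nor
in mainline at this point), a vendor-specific hypervisor service owner and
a PTP function ID defined under it could look like:

/*
 * Hypothetical sketch only: SMCCC owning entity 6 is the vendor-specific
 * hypervisor service range, and the function number 0 is an arbitrary
 * choice for illustration.
 */
#define ARM_SMCCC_OWNER_VENDOR_HYP	6

#define ARM_SMCCC_VENDOR_HYP_KVM_PTP_FUNC_ID			\
	ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL,			\
			   ARM_SMCCC_SMC_32,			\
			   ARM_SMCCC_OWNER_VENDOR_HYP,		\
			   0)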

> +
>  #ifndef __ASSEMBLY__
>  
>  #include 
> diff --git a/virt/kvm/arm/hypercalls.c b/virt/kvm/arm/hypercalls.c
> index 550dfa3e53cd..a5309c28d4dc 100644
> --- a/virt/kvm/arm/hypercalls.c
> +++ b/virt/kvm/arm/hypercalls.c
> @@ -3,6 +3,7 @@
>  
>  #include 
>  #include 
> +#include 
>  
>  #include 
>  
> @@ -11,8 +12,11 @@
>  
>  int kvm_hvc_call_handler(struct kvm_vcpu *vcpu)
>  {
> - u32 func_id = smccc_get_function(vcpu);
> + struct system_time_snapshot systime_snapshot;
> + long arg[4];
> + u64 cycles;
>   long val = SMCCC_RET_NOT_SUPPORTED;
> + u32 func_id = smccc_get_function(vcpu);
>   u32 feature;
>   gpa_t gpa;
>  
> @@ -62,6 +66,44 @@ int kvm_hvc_call_handler(struct kvm_vcpu *vcpu)
>   if (gpa != GPA_INVALID)
>   val = gpa;
>   break;
> + /*
> +  * This serves virtual kvm_ptp.
> +  * Four values will be passed back.
> +  * reg0 stores high 32-bit host ktime;
> +  * reg1 stores low 32-bit host ktime;
> +  * reg2 stores high 32-bit difference of host cycles and cntvoff;
> +  * reg3 stores low 32-bit difference of host cycles and cntvoff.
> +  */
> + case ARM_SMCCC_HYP_KVM_PTP_FUNC_ID:

Shouldn't the host opt-in to providing this to the guest, as with other
features?

> + /*
> +  * The system time and the counter value must be captured at the
> +  * same time to keep consistency and precision.
> +  */
> + ktime_get_snapshot(&systime_snapshot);
> + if (systime_snapshot.cs_id != CSID_ARM_ARCH_COUNTER)
> + break;
> + arg[0] = upper_32_bits(systime_snapshot.real);
> + arg[1] = lower_32_bits(systime_snapshot.real);

Why exactly does the guest need the host's real time? Neither the cover
letter nor this commit message have explained that, and for those of us
unfamliar with PTP it would be very helpful to know that to understand
what's going on.

> + /*
> +  * which of virtual counter or physical counter being
> +  * asked for is decided by the first argument.
> +  */
> + feature = smccc_get_arg1(vcpu);
> + switch (feature) {
> + case ARM_SMCCC_HYP_KVM_PTP_PHY:
> + cycles = systime_snapshot.cycles;
> + break;
> + case ARM_SMCCC_HYP_KVM_PTP_VIRT:
> + default:
> + cycles = systime_snapshot.cycles -
> + vcpu_vtimer(vcpu)->cntvoff;
> +  
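
To make the register layout above concrete, here is a minimal guest-side
sketch (the helper name is hypothetical and not part of the patch): it asks
the host for a snapshot based on the virtual counter and rebuilds the 64-bit
host wall-clock time and counter value from the four result registers.

#include <linux/arm-smccc.h>
#include <linux/kernel.h>

/*
 * Hypothetical guest-side helper, error handling omitted for brevity:
 * a0/a1 carry the high/low 32 bits of the host ktime, a2/a3 the
 * high/low 32 bits of the counter value, as described in the hunk above.
 */
static void kvm_ptp_get_snapshot(u64 *host_ns, u64 *cycles)
{
	struct arm_smccc_res res;

	arm_smccc_1_1_invoke(ARM_SMCCC_HYP_KVM_PTP_FUNC_ID,
			     ARM_SMCCC_HYP_KVM_PTP_VIRT, &res);

	*host_ns = ((u64)lower_32_bits(res.a0) << 32) | lower_32_bits(res.a1);
	*cycles  = ((u64)lower_32_bits(res.a2) << 32) | lower_32_bits(res.a3);
}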

Re: [RFC PATCH v11 1/9] psci: export psci conduit get helper.

2020-04-21 Thread Jianyong Wu
Hi Mark,


On 2020/4/21, 5:41 PM, "Mark Rutland"  wrote:

On Tue, Apr 21, 2020 at 11:22:56AM +0800, Jianyong Wu wrote:
> Export arm_smccc_1_1_get_conduit so that modules can use the smccc helpers
> which rely on it.
> 
> Signed-off-by: Jianyong Wu 

Nit: please say 'smccc conduit' in the commit title.

OK, I will fix it in the next version.

Otherwise, I see no problem with this provided an in-tree module uses
this, so:

Acked-by: Mark Rutland 

Thanks! Glad to get this.

Best regards
Jianyong 

Mark.

> ---
>  drivers/firmware/psci/psci.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/drivers/firmware/psci/psci.c b/drivers/firmware/psci/psci.c
> index 2937d44b5df4..fd3c88f21b6a 100644
> --- a/drivers/firmware/psci/psci.c
> +++ b/drivers/firmware/psci/psci.c
> @@ -64,6 +64,7 @@ enum arm_smccc_conduit arm_smccc_1_1_get_conduit(void)
>  
>   return psci_ops.conduit;
>  }
> +EXPORT_SYMBOL(arm_smccc_1_1_get_conduit);
>  
>  typedef unsigned long (psci_fn)(unsigned long, unsigned long,
>   unsigned long, unsigned long);
> -- 
> 2.17.1
> 



Re: [PATCH v2] kvm: Replace vcpu->swait with rcuwait

2020-04-21 Thread Paolo Bonzini
On 20/04/20 23:50, Davidlohr Bueso wrote:
> On Mon, 20 Apr 2020, Paolo Bonzini wrote:
> 
>> On 20/04/20 22:56, Davidlohr Bueso wrote:
>>> On Mon, 20 Apr 2020, Marc Zyngier wrote:
>>>
 This looks like a change in the semantics of the tracepoint. Before
 this
 change, 'waited' would have been true if the vcpu waited at all. Here,
 you'd
 have false if it has been interrupted by a signal, even if the vcpu
 has waited
 for a period of time.
>>>
>>> Hmm but sleeps are now uninterruptible as we're using TASK_IDLE.
>>
>> Hold on, does that mean that you can't anymore send a signal in order to
>> kick a thread out of KVM_RUN?  Or am I just misunderstanding?
> 
> Considering that the return value of the interruptible wait is not
> checked, I would not think this breaks KVM_RUN.

What return value?  kvm_vcpu_check_block checks signal_pending, so you
could have a case where the signal is injected but you're not woken up.

Admittedly I am not familiar with how TASK_* work under the hood, but it
does seem to be like that.
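
As a rough sketch of the concern, using only core scheduler primitives (not
the code under discussion): a TASK_IDLE sleeper is off the signal wake-up
path, so a condition that includes signal_pending() can become true without
anything waking the task.

#include <linux/sched.h>
#include <linux/sched/signal.h>

/*
 * Illustrative only: the loop re-tests the condition each time it is
 * woken, but with TASK_IDLE a signal alone never performs that wake-up.
 */
static void wait_idle_until(bool (*cond)(void))
{
	for (;;) {
		set_current_state(TASK_IDLE);
		if (cond())		/* may include signal_pending(current) */
			break;
		schedule();		/* a pending signal won't wake us here */
	}
	__set_current_state(TASK_RUNNING);
}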

Paolo



Re: [RFC PATCH v11 1/9] psci: export psci conduit get helper.

2020-04-21 Thread Mark Rutland
On Tue, Apr 21, 2020 at 11:22:56AM +0800, Jianyong Wu wrote:
> Export arm_smccc_1_1_get_conduit so that modules can use the smccc helpers
> which rely on it.
> 
> Signed-off-by: Jianyong Wu 

Nit: please say 'smccc conduit' in the commit title.

Otherwise, I see no problem with this provided an in-tree module uses
this, so:

Acked-by: Mark Rutland 

Mark.

> ---
>  drivers/firmware/psci/psci.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/drivers/firmware/psci/psci.c b/drivers/firmware/psci/psci.c
> index 2937d44b5df4..fd3c88f21b6a 100644
> --- a/drivers/firmware/psci/psci.c
> +++ b/drivers/firmware/psci/psci.c
> @@ -64,6 +64,7 @@ enum arm_smccc_conduit arm_smccc_1_1_get_conduit(void)
>  
>   return psci_ops.conduit;
>  }
> +EXPORT_SYMBOL(arm_smccc_1_1_get_conduit);
>  
>  typedef unsigned long (psci_fn)(unsigned long, unsigned long,
>   unsigned long, unsigned long);
> -- 
> 2.17.1
> 


[PATCH] arm64/kvm: Fix duplicate tracepoint definitions after KVM consolidation

2020-04-21 Thread Will Deacon
Both kvm/{arm,handle_exit}.c include trace.h and attempt to instantiate
the same tracepoints, resulting in failures at link-time:

  | aarch64-linux-gnu-ld: arch/arm64/kvm/handle_exit.o:(__tracepoints+0x30):
  |   multiple definition of `__tracepoint_kvm_wfx_arm64';
  |   arch/arm64/kvm/arm.o:(__tracepoints+0x510): first defined here
  | ...

Split trace.h into two files so that the tracepoints are only created
in the C files that use them.

Cc: Marc Zyngier 
Signed-off-by: Will Deacon 
---

Applies against kvm-arm64/welcome-home. Probably worth just folding in
to the only commit on that branch.

 arch/arm64/kvm/arm.c   |   2 +-
 arch/arm64/kvm/handle_exit.c   |   2 +-
 arch/arm64/kvm/trace.h | 575 +
 arch/arm64/kvm/trace_arm.h | 378 +++
 arch/arm64/kvm/trace_handle_exit.h | 215 +++
 5 files changed, 599 insertions(+), 573 deletions(-)
 create mode 100644 arch/arm64/kvm/trace_arm.h
 create mode 100644 arch/arm64/kvm/trace_handle_exit.h
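
For readers less familiar with the tracepoint machinery: each of the new
headers is expected to end with the usual define_trace.h boilerplate so that
a C file defining CREATE_TRACE_POINTS instantiates its events exactly once.
A rough skeleton of trace_arm.h, with the guard and file names assumed
rather than taken from the patch:

/* Sketch only; trace_handle_exit.h follows the same pattern. */
#if !defined(_TRACE_ARM_ARM64_KVM_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_ARM_ARM64_KVM_H

#include <linux/tracepoint.h>

#undef TRACE_SYSTEM
#define TRACE_SYSTEM kvm

/* ... only the TRACE_EVENT() definitions used by arm.c ... */

#endif /* _TRACE_ARM_ARM64_KVM_H */

#undef TRACE_INCLUDE_PATH
#define TRACE_INCLUDE_PATH .
#undef TRACE_INCLUDE_FILE
#define TRACE_INCLUDE_FILE trace_arm

/* This part must be outside protection */
#include <trace/define_trace.h>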

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 48d0ec44ad77..c958bb37b769 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -22,7 +22,7 @@
 #include 
 
 #define CREATE_TRACE_POINTS
-#include "trace.h"
+#include "trace_arm.h"
 
 #include 
 #include 
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index aacfc55de44c..eb194696ef62 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -23,7 +23,7 @@
 #include 
 
 #define CREATE_TRACE_POINTS
-#include "trace.h"
+#include "trace_handle_exit.h"
 
 typedef int (*exit_handle_fn)(struct kvm_vcpu *, struct kvm_run *);
 
diff --git a/arch/arm64/kvm/trace.h b/arch/arm64/kvm/trace.h
index e83eafb2ba0c..86f9ea47be29 100644
--- a/arch/arm64/kvm/trace.h
+++ b/arch/arm64/kvm/trace.h
@@ -1,575 +1,8 @@
 /* SPDX-License-Identifier: GPL-2.0 */
-#if !defined(_TRACE_ARM64_KVM_H) || defined(TRACE_HEADER_MULTI_READ)
+#ifndef _TRACE_ARM64_KVM_H
 #define _TRACE_ARM64_KVM_H
 
-#include 
-#include 
-#include "sys_regs.h"
+#include "trace_arm.h"
+#include "trace_handle_exit.h"
 
-#undef TRACE_SYSTEM
-#define TRACE_SYSTEM kvm
-
-TRACE_EVENT(kvm_wfx_arm64,
-   TP_PROTO(unsigned long vcpu_pc, bool is_wfe),
-   TP_ARGS(vcpu_pc, is_wfe),
-
-   TP_STRUCT__entry(
-   __field(unsigned long,  vcpu_pc)
-   __field(bool,   is_wfe)
-   ),
-
-   TP_fast_assign(
-   __entry->vcpu_pc = vcpu_pc;
-   __entry->is_wfe  = is_wfe;
-   ),
-
-   TP_printk("guest executed wf%c at: 0x%08lx",
- __entry->is_wfe ? 'e' : 'i', __entry->vcpu_pc)
-);
-
-TRACE_EVENT(kvm_hvc_arm64,
-   TP_PROTO(unsigned long vcpu_pc, unsigned long r0, unsigned long imm),
-   TP_ARGS(vcpu_pc, r0, imm),
-
-   TP_STRUCT__entry(
-   __field(unsigned long, vcpu_pc)
-   __field(unsigned long, r0)
-   __field(unsigned long, imm)
-   ),
-
-   TP_fast_assign(
-   __entry->vcpu_pc = vcpu_pc;
-   __entry->r0 = r0;
-   __entry->imm = imm;
-   ),
-
-   TP_printk("HVC at 0x%08lx (r0: 0x%08lx, imm: 0x%lx)",
- __entry->vcpu_pc, __entry->r0, __entry->imm)
-);
-
-TRACE_EVENT(kvm_arm_setup_debug,
-   TP_PROTO(struct kvm_vcpu *vcpu, __u32 guest_debug),
-   TP_ARGS(vcpu, guest_debug),
-
-   TP_STRUCT__entry(
-   __field(struct kvm_vcpu *, vcpu)
-   __field(__u32, guest_debug)
-   ),
-
-   TP_fast_assign(
-   __entry->vcpu = vcpu;
-   __entry->guest_debug = guest_debug;
-   ),
-
-   TP_printk("vcpu: %p, flags: 0x%08x", __entry->vcpu, 
__entry->guest_debug)
-);
-
-TRACE_EVENT(kvm_arm_clear_debug,
-   TP_PROTO(__u32 guest_debug),
-   TP_ARGS(guest_debug),
-
-   TP_STRUCT__entry(
-   __field(__u32, guest_debug)
-   ),
-
-   TP_fast_assign(
-   __entry->guest_debug = guest_debug;
-   ),
-
-   TP_printk("flags: 0x%08x", __entry->guest_debug)
-);
-
-TRACE_EVENT(kvm_arm_set_dreg32,
-   TP_PROTO(const char *name, __u32 value),
-   TP_ARGS(name, value),
-
-   TP_STRUCT__entry(
-   __field(const char *, name)
-   __field(__u32, value)
-   ),
-
-   TP_fast_assign(
-   __entry->name = name;
-   __entry->value = value;
-   ),
-
-   TP_printk("%s: 0x%08x", __entry->name, __entry->value)
-);
-
-TRACE_DEFINE_SIZEOF(__u64);
-
-TRACE_EVENT(kvm_arm_set_regset,
-   TP_PROTO(const char *type, int len, __u64 *control, __u64 *value),
-   TP_ARGS(type, len, control, value),
-   TP_STRUCT__entry(
-   __field(const char *, name)
-   __field(int, len)
-   __array(u64, ctrls, 16)
-   __array(u64, values, 16)
-   ),
-   TP_fast_assign(
-   __entry->name = type;
-   __entry->len = len;