On 2020-12-09 06:09, Jianyong Wu wrote:
ptp_kvm will get this service through an SMCCC call.
The service offers the host's wall-clock time and counter cycle count to the guest.
The caller must specify whether it wants the raw host cycle count
or the host cycle count minus cntvoff.
Signed-off-by: Jianyong Wu
---
arch/arm64/kvm/hypercalls.c | 59 +++++++++++++++++++++++++++++++++++++++
include/linux/arm-smccc.h   | 16 ++++++++++
2 files changed, 75 insertions(+)
diff --git a/arch/arm64/kvm/hypercalls.c b/arch/arm64/kvm/hypercalls.c
index b9d8607083eb..9a4834502388 100644
--- a/arch/arm64/kvm/hypercalls.c
+++ b/arch/arm64/kvm/hypercalls.c
@@ -9,6 +9,49 @@
#include
#include
+static void kvm_ptp_get_time(struct kvm_vcpu *vcpu, u64 *val)
+{
+ struct system_time_snapshot systime_snapshot;
+ u64 cycles = ~0UL;
+ u32 feature;
+
+ /*
+ * system time and counter value must captured in the same
+ * time to keep consistency and precision.
+ */
+ ktime_get_snapshot(&systime_snapshot);
+
+ // binding ptp_kvm clocksource to arm_arch_counter
+ if (systime_snapshot.cs_id != CSID_ARM_ARCH_COUNTER)
+ return;
+
+ val[0] = upper_32_bits(systime_snapshot.real);
+ val[1] = lower_32_bits(systime_snapshot.real);
+
+ /*
+ * which of virtual counter or physical counter being
+ * asked for is decided by the r1 value of SMCCC
+ * call. If no invalid r1 value offered, default cycle
+ * value(-1) will be returned.
+ * Note: keep in mind that feature is u32 and smccc_get_arg1
+ * will return u64, so need auto cast here.
This feels like a useless comment. On the other hand, you don't
document why we don't need to worry about -1 being used as the
top 32 bits of the real time.
+ */
+ feature = smccc_get_arg1(vcpu);
+ switch (feature) {
+ case ARM_PTP_VIRT_COUNTER:
This #define should at least contain "KVM", because this isn't
something that is architectural.
+ cycles = systime_snapshot.cycles - vcpu_read_sys_reg(vcpu, CNTVOFF_EL2);
+ break;
+ case ARM_PTP_PHY_COUNTER:
s/PHY/PHYS/
+ cycles = systime_snapshot.cycles;
+ break;
+ default:
+ val[0] = SMCCC_RET_NOT_SUPPORTED;
+ break;
+ }
+ val[2] = upper_32_bits(cycles);
+ val[3] = lower_32_bits(cycles);
The control flow is really odd. You start by writing values to
the return array, and then have to overwrite it if you fail.
I came up with the following instead (including comments):
@@ -16,38 +16,44 @@ static void kvm_ptp_get_time(struct kvm_vcpu *vcpu, u64 *val)
u32 feature;
/*
- * system time and counter value must captured in the same
+ * system time and counter value must captured at the same
 * time to keep consistency and precision.
 */
 ktime_get_snapshot(&systime_snapshot);
- // binding ptp_kvm clocksource to arm_arch_counter
+ /*
+ * This is only valid if the current clocksource is the
+ * architected counter, as this is the only one the guest
+ * can see.
+ */
if (systime_snapshot.cs_id != CSID_ARM_ARCH_COUNTER)
return;
- val[0] = upper_32_bits(systime_snapshot.real);
- val[1] = lower_32_bits(systime_snapshot.real);
-
/*
- * which of virtual counter or physical counter being
- * asked for is decided by the r1 value of SMCCC
- * call. If no invalid r1 value offered, default cycle
- * value(-1) will be returned.
- * Note: keep in mind that feature is u32 and smccc_get_arg1
- * will return u64, so need auto cast here.
+ * The guest selects one of the two reference counters
+ * (virtual or physical) with the first argument of the SMCCC
+ * call. In case the identifier is not supported, error out.
*/
feature = smccc_get_arg1(vcpu);
switch (feature) {
- case ARM_PTP_VIRT_COUNTER:
+ case KVM_PTP_VIRT_COUNTER:
cycles = systime_snapshot.cycles - vcpu_read_sys_reg(vcpu, CNTVOFF_EL2);
break;
- case ARM_PTP_PHY_COUNTER:
+ case KVM_PTP_PHYS_COUNTER:
cycles = systime_snapshot.cycles;
break;
default:
- val[0] = SMCCC_RET_NOT_SUPPORTED;
- break;
+ return;
}
+
+ /*
+ * This relies on the top bit of val[0] never being set for
+ * valid values of system time, because that is *really* far
+ * in the future (about 292 years from 1970, and at that stage
+ * nobody will give a damn about it).
+ */
+ val[0] = upper_32_bits(systime_snapshot.real);
+ val[1] = lower_32_bits(systime_snapshot.real);
val[2] = upper_32_bits(cycles);
val[3] = lower_32_bits(cycles);
}
+}
+
int kvm_hvc_call_handler(struct kvm_vcpu *vcpu)
{