[PATCH v3 2/2] KVM: arm/arm64: Route vtimer events to user space
We have 2 modes for dealing with interrupts in the ARM world. We can either
handle them all using hardware acceleration through the vgic or we can
emulate a gic in user space and only drive CPU IRQ pins from there.

Unfortunately, when driving IRQs from user space, we never tell user space
about timer events that may result in interrupt line state changes, so we
lose out on timer events if we run with user space gic emulation.

This patch fixes that by routing vtimer expiration events to user space.
With this patch I can successfully run edk2 and Linux with user space gic
emulation.

Signed-off-by: Alexander Graf
---
v1 -> v2:
  - Add back curly brace that got lost

v2 -> v3:
  - Split into patch set
---
 Documentation/virtual/kvm/api.txt |  24 +++-
 arch/arm/include/asm/kvm_host.h   |   3 +
 arch/arm/kvm/arm.c                |  22 ---
 arch/arm64/include/asm/kvm_host.h |   3 +
 include/uapi/linux/kvm.h          |  14 +
 virt/kvm/arm/arch_timer.c         | 125 +++---
 6 files changed, 149 insertions(+), 42 deletions(-)

diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
index 23937e0..dec1a78 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -3202,8 +3202,10 @@ struct kvm_run {
 	/* in */
 	__u8 request_interrupt_window;
 
-Request that KVM_RUN return when it becomes possible to inject external
+[x86] Request that KVM_RUN return when it becomes possible to inject external
 interrupts into the guest. Useful in conjunction with KVM_INTERRUPT.
+[arm*] Bits set to 1 in here mask IRQ lines that would otherwise potentially
+trigger forever. Useful with KVM_CAP_ARM_TIMER.
 
 	__u8 padding1[7];
 
@@ -3519,6 +3521,16 @@ Hyper-V SynIC state change. Notification is used to remap SynIC
 event/message pages and to enable/disable SynIC messages/events processing
 in userspace.
 
+		/* KVM_EXIT_ARM_TIMER */
+		struct {
+			__u8 timesource;
+		} arm_timer;
+
+Indicates that a timer triggered that user space needs to handle and
+potentially mask with vcpu->run->request_interrupt_window to allow the
+guest to proceed. This only happens for timers that got enabled through
+KVM_CAP_ARM_TIMER.
+
 		/* Fix the size of the union. */
 		char padding[256];
 	};
@@ -3739,6 +3751,16 @@ Once this is done the KVM_REG_MIPS_VEC_* and KVM_REG_MIPS_MSA_* registers
 can be accessed, and the Config5.MSAEn bit is accessible via the KVM API
 and also from the guest.
 
+6.11 KVM_CAP_ARM_TIMER
+
+Architectures: arm, arm64
+Target: vcpu
+Parameters: args[0] contains a bitmap of timers to enable
+
+This capability allows to route per-core timers into user space. When it's
+enabled, the enabled timers trigger KVM_EXIT_ARM_TIMER guest exits when they
+are pending, unless masked by vcpu->run->request_interrupt_window.
+
 7. Capabilities that can be enabled on VMs
 ------------------------------------------
 
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index de338d9..77d1f73 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -180,6 +180,9 @@ struct kvm_vcpu_arch {
 
 	/* Detect first run of a vcpu */
 	bool has_run_once;
+
+	/* User space wants timer notifications */
+	bool user_space_arm_timers;
 };
 
 struct kvm_vm_stat {
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index c84b6ad..57bdb71 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -187,6 +187,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	case KVM_CAP_ARM_PSCI_0_2:
 	case KVM_CAP_READONLY_MEM:
 	case KVM_CAP_MP_STATE:
+	case KVM_CAP_ARM_TIMER:
 		r = 1;
 		break;
 	case KVM_CAP_COALESCED_MMIO:
@@ -474,13 +475,7 @@ static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
 			return ret;
 	}
 
-	/*
-	 * Enable the arch timers only if we have an in-kernel VGIC
-	 * and it has been properly initialized, since we cannot handle
-	 * interrupts from the virtual timer with a userspace gic.
-	 */
-	if (irqchip_in_kernel(kvm) && vgic_initialized(kvm))
-		ret = kvm_timer_enable(vcpu);
+	ret = kvm_timer_enable(vcpu);
 
 	return ret;
 }
@@ -601,6 +596,13 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 			run->exit_reason = KVM_EXIT_INTR;
 		}
 
+		if (kvm_check_request(KVM_REQ_PENDING_TIMER, vcpu)) {
+			/* Tell user space about the pending vtimer */
+			ret = 0;
+			run->exit_reason = KVM_EXIT_ARM_TIMER;
+			run->arm_timer.timesource = KVM_ARM_TIMER_VTIMER;
+		}
+
 		if (ret <= 0 || need_new_vmid_gen(vcpu->kvm) ||
[PATCH v3 0/2] KVM: ARM: Enable vtimers with user space gic
Some systems out there (well, one type in particular - the Raspberry Pi
series) do have virtualization capabilities in the core, but no ARM GIC
interrupt controller. To run on these systems, the cleanest route is to just
handle all interrupt delivery in user space and only deal with IRQ pins on
the core side in KVM.

This works pretty well already, but breaks when the guest starts to use
architected timers, as these are handled straight inside kernel space today.

This patch set allows user space to receive vtimer events as well as mask
them, so that we can handle all vtimer related interrupt injection from user
space, enabling us to use the architected timer with user space gic
emulation.

I have successfully run edk2 as well as Linux using these patches on a
Raspberry Pi 3 system with acceptable speed.

A branch with WIP QEMU code can be found here:

  https://github.com/agraf/qemu.git no-kvm-irqchip

To use the user space irqchip, just run it with

  $ qemu-system-aarch64 -M virt,kernel-irqchip=off

v1 -> v2:
  - Add back curly brace that got lost

v2 -> v3:
  - Fix "only only" in documentation
  - Split patches
  - Remove kvm_emulate.h include

Alexander Graf (2):
  KVM: arm/arm64: Add vcpu ENABLE_CAP functionality
  KVM: arm/arm64: Route vtimer events to user space

 Documentation/virtual/kvm/api.txt |  28 -
 arch/arm/include/asm/kvm_host.h   |   3 +
 arch/arm/kvm/arm.c                |  47 +++---
 arch/arm64/include/asm/kvm_host.h |   3 +
 include/uapi/linux/kvm.h          |  14 +
 virt/kvm/arm/arch_timer.c         | 125 +++---
 6 files changed, 177 insertions(+), 43 deletions(-)

-- 
2.6.6

___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
[PATCH v3 1/2] KVM: arm/arm64: Add vcpu ENABLE_CAP functionality
In a follow-up patch we will need to enable capabilities on demand for
backwards compatibility. This patch adds the generic framework to handle
vcpu cap enablement to the arm code base.

Signed-off-by: Alexander Graf
---
 Documentation/virtual/kvm/api.txt |  4 +++-
 arch/arm/kvm/arm.c                | 25 +
 2 files changed, 28 insertions(+), 1 deletion(-)

diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
index 739db9a..23937e0 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -997,7 +997,9 @@ documentation when it pops into existence).
 
 Capability: KVM_CAP_ENABLE_CAP, KVM_CAP_ENABLE_CAP_VM
 Architectures: x86 (only KVM_CAP_ENABLE_CAP_VM),
-               mips (only KVM_CAP_ENABLE_CAP), ppc, s390
+               mips (only KVM_CAP_ENABLE_CAP), ppc, s390,
+               arm (only KVM_CAP_ENABLE_CAP),
+               arm64 (only KVM_CAP_ENABLE_CAP)
 Type: vcpu ioctl, vm ioctl (with KVM_CAP_ENABLE_CAP_VM)
 Parameters: struct kvm_enable_cap (in)
 Returns: 0 on success; -1 on error
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 75f130e..c84b6ad 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -878,6 +878,23 @@ static int kvm_arm_vcpu_has_attr(struct kvm_vcpu *vcpu,
 	return ret;
 }
 
+static int kvm_vcpu_ioctl_enable_cap(struct kvm_vcpu *vcpu,
+				     struct kvm_enable_cap *cap)
+{
+	int r;
+
+	if (cap->flags)
+		return -EINVAL;
+
+	switch (cap->cap) {
+	default:
+		r = -EINVAL;
+		break;
+	}
+
+	return r;
+}
+
 long kvm_arch_vcpu_ioctl(struct file *filp,
 			 unsigned int ioctl, unsigned long arg)
 {
@@ -941,6 +958,14 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
 			return -EFAULT;
 		return kvm_arm_vcpu_has_attr(vcpu, &attr);
 	}
+	case KVM_ENABLE_CAP: {
+		struct kvm_enable_cap cap;
+
+		if (copy_from_user(&cap, argp, sizeof(cap)))
+			return -EFAULT;
+		return kvm_vcpu_ioctl_enable_cap(vcpu, &cap);
+	}
 	default:
 		return -EINVAL;
 	}
-- 
2.6.6
[PATCH] KVM: arm/arm64: timer: Fix hw sync for user space irqchip path
While adding the new vgic implementation, apparently nobody tested the
non-vgic path where user space controls the vgic, so two functions slipped
through the cracks that get called in generic code but don't check whether
hardware support is enabled. This patch guards them with proper checks to
ensure we only try to use vgic data structures if they are available.

Without this, I get a stack trace:

[   74.363037] Unable to handle kernel paging request at virtual address ffe8
[...]
[   74.929654] [] _raw_spin_lock+0x1c/0x58
[   74.935133] [] kvm_vgic_flush_hwstate+0x88/0x288
[   74.941406] [] kvm_arch_vcpu_ioctl_run+0xfc/0x630
[   74.947766] [] kvm_vcpu_ioctl+0x2f4/0x710
[   74.953420] [] do_vfs_ioctl+0xb0/0x728
[   74.958807] [] SyS_ioctl+0x94/0xa8
[   74.963844] [] el0_svc_naked+0x38/0x3c

Fixes: 0919e84c0
Cc: sta...@vger.kernel.org
Signed-off-by: Alexander Graf
---
 virt/kvm/arm/vgic/vgic.c | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/virt/kvm/arm/vgic/vgic.c b/virt/kvm/arm/vgic/vgic.c
index e83b7fe..9f312ba 100644
--- a/virt/kvm/arm/vgic/vgic.c
+++ b/virt/kvm/arm/vgic/vgic.c
@@ -645,6 +645,9 @@ next:
 /* Sync back the hardware VGIC state into our emulation after a guest's run. */
 void kvm_vgic_sync_hwstate(struct kvm_vcpu *vcpu)
 {
+	if (!vcpu->kvm->arch.vgic.enabled)
+		return;
+
 	vgic_process_maintenance_interrupt(vcpu);
 	vgic_fold_lr_state(vcpu);
 	vgic_prune_ap_list(vcpu);
@@ -653,6 +656,9 @@ void kvm_vgic_sync_hwstate(struct kvm_vcpu *vcpu)
 /* Flush our emulation state into the GIC hardware before entering the guest. */
 void kvm_vgic_flush_hwstate(struct kvm_vcpu *vcpu)
 {
+	if (!vcpu->kvm->arch.vgic.enabled)
+		return;
+
 	spin_lock(&vcpu->arch.vgic_cpu.ap_list_lock);
 	vgic_flush_lr_state(vcpu);
 	spin_unlock(&vcpu->arch.vgic_cpu.ap_list_lock);
-- 
2.6.6
[PATCH v2] KVM: arm/arm64: Route vtimer events to user space
We have 2 modes for dealing with interrupts in the ARM world. We can either handle them all using hardware acceleration through the vgic or we can emulate a gic in user space and only drive CPU IRQ pins from there. Unfortunately, when driving IRQs from user space, we never tell user space about timer events that may result in interrupt line state changes, so we lose out on timer events if we run with user space gic emulation. This patch set fixes that by routing vtimer expiration events to user space. With this patch I can successfully run edk2 and Linux with user space gic emulation. Signed-off-by: Alexander Graf --- A branch with WIP QEMU code can be found here: https://github.com/agraf/qemu.git no-kvm-irqchip v1 -> v2: - Add back curly brace that got lost (and is very stubborn, sorry for the resubmit to actually add it back for real) --- Documentation/virtual/kvm/api.txt | 28 - arch/arm/include/asm/kvm_host.h | 3 + arch/arm/kvm/arm.c| 47 +++--- arch/arm64/include/asm/kvm_host.h | 3 + include/uapi/linux/kvm.h | 14 + virt/kvm/arm/arch_timer.c | 126 -- 6 files changed, 178 insertions(+), 43 deletions(-) diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt index 739db9a..6a64c53 100644 --- a/Documentation/virtual/kvm/api.txt +++ b/Documentation/virtual/kvm/api.txt @@ -997,7 +997,9 @@ documentation when it pops into existence). 
Capability: KVM_CAP_ENABLE_CAP, KVM_CAP_ENABLE_CAP_VM Architectures: x86 (only KVM_CAP_ENABLE_CAP_VM), - mips (only KVM_CAP_ENABLE_CAP), ppc, s390 + mips (only KVM_CAP_ENABLE_CAP), ppc, s390, + arm (only KVM_CAP_ENABLE_CAP), + arm64 (only only KVM_CAP_ENABLE_CAP) Type: vcpu ioctl, vm ioctl (with KVM_CAP_ENABLE_CAP_VM) Parameters: struct kvm_enable_cap (in) Returns: 0 on success; -1 on error @@ -3200,8 +3202,10 @@ struct kvm_run { /* in */ __u8 request_interrupt_window; -Request that KVM_RUN return when it becomes possible to inject external +[x86] Request that KVM_RUN return when it becomes possible to inject external interrupts into the guest. Useful in conjunction with KVM_INTERRUPT. +[arm*] Bits set to 1 in here mask IRQ lines that would otherwise potentially +trigger forever. Useful with KVM_CAP_ARM_TIMER. __u8 padding1[7]; @@ -3517,6 +3521,16 @@ Hyper-V SynIC state change. Notification is used to remap SynIC event/message pages and to enable/disable SynIC messages/events processing in userspace. + /* KVM_EXIT_ARM_TIMER */ + struct { + __u8 timesource; + } arm_timer; + +Indicates that a timer triggered that user space needs to handle and +potentially mask with vcpu->run->request_interrupt_window to allow the +guest to proceed. This only happens for timers that got enabled through +KVM_CAP_ARM_TIMER. + /* Fix the size of the union. */ char padding[256]; }; @@ -3737,6 +3751,16 @@ Once this is done the KVM_REG_MIPS_VEC_* and KVM_REG_MIPS_MSA_* registers can be accessed, and the Config5.MSAEn bit is accessible via the KVM API and also from the guest. +6.11 KVM_CAP_ARM_TIMER + +Architectures: arm, arm64 +Target: vcpu +Parameters: args[0] contains a bitmap of timers to enable + +This capability allows to route per-core timers into user space. When it's +enabled, the enabled timers trigger KVM_EXIT_ARM_TIMER guest exits when they +are pending, unless masked by vcpu->run->request_interrupt_window. + 7. 
Capabilities that can be enabled on VMs -- diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h index de338d9..77d1f73 100644 --- a/arch/arm/include/asm/kvm_host.h +++ b/arch/arm/include/asm/kvm_host.h @@ -180,6 +180,9 @@ struct kvm_vcpu_arch { /* Detect first run of a vcpu */ bool has_run_once; + + /* User space wants timer notifications */ + bool user_space_arm_timers; }; struct kvm_vm_stat { diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c index 75f130e..57bdb71 100644 --- a/arch/arm/kvm/arm.c +++ b/arch/arm/kvm/arm.c @@ -187,6 +187,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext) case KVM_CAP_ARM_PSCI_0_2: case KVM_CAP_READONLY_MEM: case KVM_CAP_MP_STATE: + case KVM_CAP_ARM_TIMER: r = 1; break; case KVM_CAP_COALESCED_MMIO: @@ -474,13 +475,7 @@ static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu) return ret; } - /* -* Enable the arch timers only if we have an in-kernel VGIC -* and it has been properly initialized, since we cannot handle -* interrupts from the virtual timer with a userspace gic. -*/ - if (irqchip_in_kernel(kvm) && vgic_initialized(kvm)) -
[PATCH v2] KVM: arm/arm64: Route vtimer events to user space
We have 2 modes for dealing with interrupts in the ARM world. We can either handle them all using hardware acceleration through the vgic or we can emulate a gic in user space and only drive CPU IRQ pins from there. Unfortunately, when driving IRQs from user space, we never tell user space about timer events that may result in interrupt line state changes, so we lose out on timer events if we run with user space gic emulation. This patch set fixes that by routing vtimer expiration events to user space. With this patch I can successfully run edk2 and Linux with user space gic emulation. Signed-off-by: Alexander Graf --- A branch with WIP QEMU code can be found here: https://github.com/agraf/qemu.git no-kvm-irqchip v1 -> v2: - Add back curly brace that got lost --- Documentation/virtual/kvm/api.txt | 28 - arch/arm/include/asm/kvm_host.h | 3 + arch/arm/kvm/arm.c| 46 +++--- arch/arm64/include/asm/kvm_host.h | 3 + include/uapi/linux/kvm.h | 14 + virt/kvm/arm/arch_timer.c | 126 -- 6 files changed, 177 insertions(+), 43 deletions(-) diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt index 739db9a..6a64c53 100644 --- a/Documentation/virtual/kvm/api.txt +++ b/Documentation/virtual/kvm/api.txt @@ -997,7 +997,9 @@ documentation when it pops into existence). Capability: KVM_CAP_ENABLE_CAP, KVM_CAP_ENABLE_CAP_VM Architectures: x86 (only KVM_CAP_ENABLE_CAP_VM), - mips (only KVM_CAP_ENABLE_CAP), ppc, s390 + mips (only KVM_CAP_ENABLE_CAP), ppc, s390, + arm (only KVM_CAP_ENABLE_CAP), + arm64 (only only KVM_CAP_ENABLE_CAP) Type: vcpu ioctl, vm ioctl (with KVM_CAP_ENABLE_CAP_VM) Parameters: struct kvm_enable_cap (in) Returns: 0 on success; -1 on error @@ -3200,8 +3202,10 @@ struct kvm_run { /* in */ __u8 request_interrupt_window; -Request that KVM_RUN return when it becomes possible to inject external +[x86] Request that KVM_RUN return when it becomes possible to inject external interrupts into the guest. Useful in conjunction with KVM_INTERRUPT. 
+[arm*] Bits set to 1 in here mask IRQ lines that would otherwise potentially +trigger forever. Useful with KVM_CAP_ARM_TIMER. __u8 padding1[7]; @@ -3517,6 +3521,16 @@ Hyper-V SynIC state change. Notification is used to remap SynIC event/message pages and to enable/disable SynIC messages/events processing in userspace. + /* KVM_EXIT_ARM_TIMER */ + struct { + __u8 timesource; + } arm_timer; + +Indicates that a timer triggered that user space needs to handle and +potentially mask with vcpu->run->request_interrupt_window to allow the +guest to proceed. This only happens for timers that got enabled through +KVM_CAP_ARM_TIMER. + /* Fix the size of the union. */ char padding[256]; }; @@ -3737,6 +3751,16 @@ Once this is done the KVM_REG_MIPS_VEC_* and KVM_REG_MIPS_MSA_* registers can be accessed, and the Config5.MSAEn bit is accessible via the KVM API and also from the guest. +6.11 KVM_CAP_ARM_TIMER + +Architectures: arm, arm64 +Target: vcpu +Parameters: args[0] contains a bitmap of timers to enable + +This capability allows to route per-core timers into user space. When it's +enabled, the enabled timers trigger KVM_EXIT_ARM_TIMER guest exits when they +are pending, unless masked by vcpu->run->request_interrupt_window. + 7. 
Capabilities that can be enabled on VMs -- diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h index de338d9..77d1f73 100644 --- a/arch/arm/include/asm/kvm_host.h +++ b/arch/arm/include/asm/kvm_host.h @@ -180,6 +180,9 @@ struct kvm_vcpu_arch { /* Detect first run of a vcpu */ bool has_run_once; + + /* User space wants timer notifications */ + bool user_space_arm_timers; }; struct kvm_vm_stat { diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c index 75f130e..1b4b9a6 100644 --- a/arch/arm/kvm/arm.c +++ b/arch/arm/kvm/arm.c @@ -187,6 +187,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext) case KVM_CAP_ARM_PSCI_0_2: case KVM_CAP_READONLY_MEM: case KVM_CAP_MP_STATE: + case KVM_CAP_ARM_TIMER: r = 1; break; case KVM_CAP_COALESCED_MMIO: @@ -474,13 +475,7 @@ static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu) return ret; } - /* -* Enable the arch timers only if we have an in-kernel VGIC -* and it has been properly initialized, since we cannot handle -* interrupts from the virtual timer with a userspace gic. -*/ - if (irqchip_in_kernel(kvm) && vgic_initialized(kvm)) - ret = kvm_timer_enable(vcpu); + ret = kvm_timer_enable(vcpu); r
[PATCH] KVM: arm/arm64: Route vtimer events to user space
We have 2 modes for dealing with interrupts in the ARM world. We can either handle them all using hardware acceleration through the vgic or we can emulate a gic in user space and only drive CPU IRQ pins from there. Unfortunately, when driving IRQs from user space, we never tell user space about timer events that may result in interrupt line state changes, so we lose out on timer events if we run with user space gic emulation. This patch set fixes that by routing vtimer expiration events to user space. With this patch I can successfully run edk2 and Linux with user space gic emulation. Signed-off-by: Alexander Graf --- A branch with WIP QEMU code can be found here: https://github.com/agraf/qemu.git no-kvm-irqchip --- Documentation/virtual/kvm/api.txt | 28 - arch/arm/include/asm/kvm_host.h | 3 + arch/arm/kvm/arm.c| 46 +++--- arch/arm64/include/asm/kvm_host.h | 3 + include/uapi/linux/kvm.h | 14 + virt/kvm/arm/arch_timer.c | 126 -- 6 files changed, 177 insertions(+), 43 deletions(-) diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt index 739db9a..6a64c53 100644 --- a/Documentation/virtual/kvm/api.txt +++ b/Documentation/virtual/kvm/api.txt @@ -997,7 +997,9 @@ documentation when it pops into existence). Capability: KVM_CAP_ENABLE_CAP, KVM_CAP_ENABLE_CAP_VM Architectures: x86 (only KVM_CAP_ENABLE_CAP_VM), - mips (only KVM_CAP_ENABLE_CAP), ppc, s390 + mips (only KVM_CAP_ENABLE_CAP), ppc, s390, + arm (only KVM_CAP_ENABLE_CAP), + arm64 (only only KVM_CAP_ENABLE_CAP) Type: vcpu ioctl, vm ioctl (with KVM_CAP_ENABLE_CAP_VM) Parameters: struct kvm_enable_cap (in) Returns: 0 on success; -1 on error @@ -3200,8 +3202,10 @@ struct kvm_run { /* in */ __u8 request_interrupt_window; -Request that KVM_RUN return when it becomes possible to inject external +[x86] Request that KVM_RUN return when it becomes possible to inject external interrupts into the guest. Useful in conjunction with KVM_INTERRUPT. 
+[arm*] Bits set to 1 in here mask IRQ lines that would otherwise potentially +trigger forever. Useful with KVM_CAP_ARM_TIMER. __u8 padding1[7]; @@ -3517,6 +3521,16 @@ Hyper-V SynIC state change. Notification is used to remap SynIC event/message pages and to enable/disable SynIC messages/events processing in userspace. + /* KVM_EXIT_ARM_TIMER */ + struct { + __u8 timesource; + } arm_timer; + +Indicates that a timer triggered that user space needs to handle and +potentially mask with vcpu->run->request_interrupt_window to allow the +guest to proceed. This only happens for timers that got enabled through +KVM_CAP_ARM_TIMER. + /* Fix the size of the union. */ char padding[256]; }; @@ -3737,6 +3751,16 @@ Once this is done the KVM_REG_MIPS_VEC_* and KVM_REG_MIPS_MSA_* registers can be accessed, and the Config5.MSAEn bit is accessible via the KVM API and also from the guest. +6.11 KVM_CAP_ARM_TIMER + +Architectures: arm, arm64 +Target: vcpu +Parameters: args[0] contains a bitmap of timers to enable + +This capability allows to route per-core timers into user space. When it's +enabled, the enabled timers trigger KVM_EXIT_ARM_TIMER guest exits when they +are pending, unless masked by vcpu->run->request_interrupt_window. + 7. 
Capabilities that can be enabled on VMs -- diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h index de338d9..77d1f73 100644 --- a/arch/arm/include/asm/kvm_host.h +++ b/arch/arm/include/asm/kvm_host.h @@ -180,6 +180,9 @@ struct kvm_vcpu_arch { /* Detect first run of a vcpu */ bool has_run_once; + + /* User space wants timer notifications */ + bool user_space_arm_timers; }; struct kvm_vm_stat { diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c index 75f130e..1b4b9a6 100644 --- a/arch/arm/kvm/arm.c +++ b/arch/arm/kvm/arm.c @@ -187,6 +187,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext) case KVM_CAP_ARM_PSCI_0_2: case KVM_CAP_READONLY_MEM: case KVM_CAP_MP_STATE: + case KVM_CAP_ARM_TIMER: r = 1; break; case KVM_CAP_COALESCED_MMIO: @@ -474,13 +475,7 @@ static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu) return ret; } - /* -* Enable the arch timers only if we have an in-kernel VGIC -* and it has been properly initialized, since we cannot handle -* interrupts from the virtual timer with a userspace gic. -*/ - if (irqchip_in_kernel(kvm) && vgic_initialized(kvm)) - ret = kvm_timer_enable(vcpu); + ret = kvm_timer_enable(vcpu); return ret; } @@ -601,6 +596,13 @@ int kvm_arch_vcp
Re: [PATCH v4 00/10] ARM: KVM: Support for vgic-v3
Hi Christoffer,

On 15/09/16 10:13, Christoffer Dall wrote:
> Hi Vladimir,
>
> On Mon, Sep 12, 2016 at 03:49:14PM +0100, Vladimir Murzin wrote:
>> Hi,
>>
>> This is an attempt to make use of vgic-v3 under arch/arm since
>> save-restore functionality got re-written in C and can be shared
>> between arm/arm64 like it has already been done for vgic-v2 and timer.
>>
>> With these patches I'm able to get a 32-core AArch32 ARMv8 guest to boot:
>>
>> ...
>> GICv3: CPU31: found redistributor 703 region 0:0x3ffd
>> CPU31: thread -1, cpu 3, socket 7, mpidr 8703
>> Brought up 32 CPUs
>> SMP: Total of 32 processors activated (768.00 BogoMIPS).
>> CPU: All CPU(s) started in SVC mode.
>> ...
>>
>> Additionally, a quite lightweight test based on the Self IPI guest
>> test[1] has been run with up to 255 cpus.
>>
> I have applied this to kvmarm/queue, fixing up a few trivial conflicts,
> and I have changed the kvm_info message.

Great!

> If you could test the integrated branch with GICv3 on a 32-bit platform,
> that would be great.

I've just pulled kvmarm/queue and started testing.

> I'll give people a few days to give their acks to the non-KVM part of
> the series and will then put it in next.

Thanks!

Vladimir

> Thanks for the work,
> -Christoffer
Re: [PATCH v4 00/10] ARM: KVM: Support for vgic-v3
Hi Vladimir,

On Mon, Sep 12, 2016 at 03:49:14PM +0100, Vladimir Murzin wrote:
> Hi,
>
> This is an attempt to make use of vgic-v3 under arch/arm since
> save-restore functionality got re-written in C and can be shared
> between arm/arm64 like it has already been done for vgic-v2 and timer.
>
> With these patches I'm able to get a 32-core AArch32 ARMv8 guest to boot:
>
> ...
> GICv3: CPU31: found redistributor 703 region 0:0x3ffd
> CPU31: thread -1, cpu 3, socket 7, mpidr 8703
> Brought up 32 CPUs
> SMP: Total of 32 processors activated (768.00 BogoMIPS).
> CPU: All CPU(s) started in SVC mode.
> ...
>
> Additionally, a quite lightweight test based on the Self IPI guest
> test[1] has been run with up to 255 cpus.
>

I have applied this to kvmarm/queue, fixing up a few trivial conflicts,
and I have changed the kvm_info message.

If you could test the integrated branch with GICv3 on a 32-bit platform,
that would be great.

I'll give people a few days to give their acks to the non-KVM part of
the series and will then put it in next.

Thanks for the work,
-Christoffer
Re: [PATCH v4 01/10] arm64: KVM: Use static keys for selecting the GIC backend
On Wed, Sep 14, 2016 at 04:20:00PM +0100, Vladimir Murzin wrote: > On 13/09/16 10:22, Christoffer Dall wrote: > > On Tue, Sep 13, 2016 at 10:11:10AM +0100, Marc Zyngier wrote: > >> On 13/09/16 09:20, Christoffer Dall wrote: > >>> On Mon, Sep 12, 2016 at 03:49:15PM +0100, Vladimir Murzin wrote: > Currently GIC backend is selected via alternative framework and this > is fine. We are going to introduce vgic-v3 to 32-bit world and there > we don't have patching framework in hand, so we can either check > support for GICv3 every time we need to choose which backend to use or > try to optimise it by using static keys. The later looks quite > promising because we can share logic involved in selecting GIC backend > between architectures if both uses static keys. > > This patch moves arm64 from alternative to static keys framework for > selecting GIC backend. For that we embed static key into vgic_global > and enable the key during vgic initialisation based on what has > already been exposed by the host GIC driver. 
> > Signed-off-by: Vladimir Murzin > --- > arch/arm64/kvm/hyp/switch.c | 21 +++-- > include/kvm/arm_vgic.h|4 > virt/kvm/arm/vgic/vgic-init.c |4 > virt/kvm/arm/vgic/vgic.c |2 +- > 4 files changed, 20 insertions(+), 11 deletions(-) > > diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c > index 5a84b45..d5c4cc5 100644 > --- a/arch/arm64/kvm/hyp/switch.c > +++ b/arch/arm64/kvm/hyp/switch.c > @@ -16,6 +16,8 @@ > */ > > #include > +#include > + > #include > #include > > @@ -126,17 +128,13 @@ static void __hyp_text __deactivate_vm(struct > kvm_vcpu *vcpu) > write_sysreg(0, vttbr_el2); > } > > -static hyp_alternate_select(__vgic_call_save_state, > -__vgic_v2_save_state, __vgic_v3_save_state, > -ARM64_HAS_SYSREG_GIC_CPUIF); > - > -static hyp_alternate_select(__vgic_call_restore_state, > -__vgic_v2_restore_state, > __vgic_v3_restore_state, > -ARM64_HAS_SYSREG_GIC_CPUIF); > - > static void __hyp_text __vgic_save_state(struct kvm_vcpu *vcpu) > { > -__vgic_call_save_state()(vcpu); > +if (static_branch_unlikely(&kvm_vgic_global_state.gicv3_cpuif)) > >>> > >>> It's a bit weird that we use _unlikely for GICv3 (at least if/when GICv3 > >>> hardware becomes mainstream), but as we don't have another primitive for > >>> the 'default disabled' case, I suppose that's the best we can do. > >> > >> We could always revert the "likelihood" of that test once GICv3 has > >> conquered the world. Or start patching the 32bit kernel like we do for > >> 64bit... 
> >> > >>> > +__vgic_v3_save_state(vcpu); > +else > +__vgic_v2_save_state(vcpu); > + > write_sysreg(read_sysreg(hcr_el2) & ~HCR_INT_OVERRIDE, hcr_el2); > } > > @@ -149,7 +147,10 @@ static void __hyp_text __vgic_restore_state(struct > kvm_vcpu *vcpu) > val |= vcpu->arch.irq_lines; > write_sysreg(val, hcr_el2); > > -__vgic_call_restore_state()(vcpu); > +if (static_branch_unlikely(&kvm_vgic_global_state.gicv3_cpuif)) > +__vgic_v3_restore_state(vcpu); > +else > +__vgic_v2_restore_state(vcpu); > } > > static bool __hyp_text __true_value(void) > diff --git a/include/kvm/arm_vgic.h b/include/kvm/arm_vgic.h > index 19b698e..994665a 100644 > --- a/include/kvm/arm_vgic.h > +++ b/include/kvm/arm_vgic.h > @@ -23,6 +23,7 @@ > #include > #include > #include > +#include > > #define VGIC_V3_MAX_CPUS255 > #define VGIC_V2_MAX_CPUS8 > @@ -63,6 +64,9 @@ struct vgic_global { > > /* Only needed for the legacy KVM_CREATE_IRQCHIP */ > boolcan_emulate_gicv2; > + > +/* GIC system register CPU interface */ > +struct static_key_false gicv3_cpuif; > >>> > >>> Documentation/static-keys.txt says that we are not supposed to use > >>> struct static_key_false directly. This will obviously work quite > >>> nicely, but we could consider adding a pair of > >>> DECLARE_STATIC_KEY_TRUE/FALSE macros that don't have the assignments, > >>> but obviously this will need an ack from other maintainers. > >>> > >>> Thoughts? > >> > >> Grepping through the tree shows that we're not the only abusers of this > >> (dynamic debug is far worse!). Happy to write the additional macros and > >> submit them if nobody beats me to it. > >> > >>