-by: Raphael Gault
Signed-off-by: Julien Thierry
Cc: Marc Zyngier
Cc: James Morse
Cc: Suzuki K Poulose
Cc: kvmarm@lists.cs.columbia.edu
---
arch/arm64/kvm/hyp/entry.S | 3 +++
1 file changed, 3 insertions(+)
diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
index e5cc8d66bf53
From: Raphael Gault
Annotate assembler functions which are callable but do not
set up a correct stack frame.
Signed-off-by: Raphael Gault
Signed-off-by: Julien Thierry
Cc: Marc Zyngier
Cc: James Morse
Cc: Suzuki K Poulose
Cc: kvmarm@lists.cs.columbia.edu
---
arch/arm64/kernel/hyp-stub.S
("KVM: arm/arm64: Support chained PMU counters")
> Suggested-by: Andrew Murray
> Suggested-by: Julien Thierry
> Cc: Marc Zyngier
> Signed-off-by: Zenghui Yu
> ---
>
> Changes since v1:
> - Introduce kvm_pmu_vcpu_init() in vcpu's creation time, move the
>a
such function at the time and I'm unsure whether this
warrants creating that separate function (I would still suggest creating
it to make things clearer).
> + kvm_pmu_stop_counter(vcpu, &pmu->pmc[i]);
Whatever other opinions are on splitting pmu_vcpu_init/reset, that
change m
My @arm.com address will stop working in a couple of weeks. Update
MAINTAINERS and .mailmap files with an address I'll have access to.
Signed-off-by: Julien Thierry
---
.mailmap| 1 +
MAINTAINERS | 2 +-
2 files changed, 2 insertions(+), 1 deletion(-)
diff --git a/.mailmap b/.mailmap
index
When using an NMI for the PMU interrupt, taking any lock might cause a
deadlock. The current PMU overflow handler in KVM takes locks when
trying to wake up a vcpu.
When overflow handler is called by an NMI, defer the vcpu waking in an
irq_work queue.
Signed-off-by: Julien Thierry
Cc
On 04/07/2019 10:01, Andre Przywara wrote:
> On Thu, 4 Jul 2019 08:38:20 +0100
> Julien Thierry wrote:
>
>> On 21/06/2019 10:38, Marc Zyngier wrote:
>>> From: Andre Przywara
>>>
>>> The VGIC maintenance IRQ signals various conditions about the LRs, w
> + bool state;
>
> /*
>* If we exit a nested VM with a pending maintenance interrupt from the
> @@ -202,8 +215,12 @@ void vgic_v3_handle_nested_maint_irq(struct kvm_vcpu
> *vcpu)
>* can re-sync the appropriate LRs and sample level triggered interrupts
>* a
> +
> + dev->kvm->arch.vgic.maint_irq = val;
> +
> + return 0;
> + }
> case KVM_DEV_ARM_VGIC_GRP_CTRL: {
> int ret;
>
> @@ -712,6 +733,7 @@ static int vgic_v3_has_attr(struct kvm_device *dev,
> case KVM_DEV_ARM_VGIC_GRP_C
On 03/07/2019 13:15, Marc Zyngier wrote:
> On 24/06/2019 13:42, Julien Thierry wrote:
>>
>>
>> On 06/21/2019 10:37 AM, Marc Zyngier wrote:
>>> From: Andre Przywara
>>>
>>> KVM internally uses accessor functions when reading or writing t
+ SYS_INSN_TO_DESC(TLBI_VMALLS12E1IS, handle_vmalls12e1is,
> forward_nv_traps),
> + SYS_INSN_TO_DESC(TLBI_IPAS2E1, handle_ipas2e1is, forward_nv_traps),
> + SYS_INSN_TO_DESC(TLBI_IPAS2LE1, handle_ipas2e1is, forward_nv_traps),
> + SYS_INSN_TO_DESC(TLBI_ALLE2, handle_alle2is, forward_nv_traps),
> + SYS_INSN_TO_DESC(TLBI_VAE2, handle_vae2, forward_nv_traps),
> + SYS_INSN_TO_DESC(TLBI_ALLE1, handle_alle1is, forward_nv_traps),
> + SYS_INSN_TO_DESC(TLBI_VALE2, handle_vae2, forward_nv_traps),
> + SYS_INSN_TO_DESC(TLBI_VMALLS12E1, handle_vmalls12e1is,
> forward_nv_traps),
> };
>
> static bool trap_dbgidr(struct kvm_vcpu *vcpu,
> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
> index 6a7cba077bce..0ea79e543b29 100644
> --- a/virt/kvm/arm/mmu.c
> +++ b/virt/kvm/arm/mmu.c
> @@ -51,7 +51,23 @@ static bool memslot_is_logging(struct kvm_memory_slot
> *memslot)
> */
> void kvm_flush_remote_tlbs(struct kvm *kvm)
> {
> - kvm_call_hyp(__kvm_tlb_flush_vmid, >arch.mmu);
> + struct kvm_s2_mmu *mmu = >arch.mmu;
> +
> + if (mmu == >arch.mmu) {
> + /*
> + * For a normal (i.e. non-nested) guest, flush entries for the
> + * given VMID *
> + */
> + kvm_call_hyp(__kvm_tlb_flush_vmid, mmu);
> + } else {
> + /*
> + * When supporting nested virtualization, we can have multiple
> + * VMIDs in play for each VCPU in the VM, so it's really not
> + * worth it to try to quiesce the system and flush all the
> + * VMIDs that may be in use, instead just nuke the whole thing.
> + */
> + kvm_call_hyp(__kvm_flush_vm_context);
> + }
> }
>
> static void kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu, phys_addr_t ipa)
>
Cheers,
--
Julien Thierry
___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
this
one to vtcr_el2.
Actually, things still seem to run with that. It looks like that
save/restore might not be completely required.
This seems to only get called in the context of handle_exit(). At that
point I think we don't need to save the *_el2 registers. vttbr_el2 and
vtcr_el2 both get s
pgd_phys;
> mmu->vmid.vmid_gen = 0;
>
> + for_each_possible_cpu(cpu)
> + *per_cpu_ptr(mmu->last_vcpu_ran, cpu) = -1;
Nit: I'd suggest putting that right after the allocation of last_vcpu_ran.
> +
> kvm_init_s2_mmu(mmu);
Hmm, now we have kvm_init_stage2_mmu() and an arch (arm or arm64)
specific kvm_init_s2_mmu()...
If we want to keep the s2 mmu structure different for arm and arm64, I'd
suggest at least renaming kvm_init_s2_mmu() so the distinction with
kvm_init_stage2_mmu() is clearer.
>
> return 0;
> @@ -1021,8 +1033,10 @@ void kvm_free_stage2_pgd(struct kvm_s2_mmu *mmu)
> spin_unlock(&kvm->mmu_lock);
>
> /* Free the HW pgd, one page at a time */
> - if (pgd)
> + if (pgd) {
> free_pages_exact(pgd, stage2_pgd_size(kvm));
> + free_percpu(mmu->last_vcpu_ran);
> + }
> }
>
> static pud_t *stage2_get_pud(struct kvm_s2_mmu *mmu, struct
> kvm_mmu_memory_cache *cache,
>
Cheers,
--
Julien Thierry
> +++ b/virt/kvm/arm/arm.c
> @@ -1005,8 +1005,10 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct
> kvm_vcpu *vcpu,
>* Ensure a rebooted VM will fault in RAM pages and detect if the
>* guest MMU is turned off and flush the caches as needed.
>*/
> - if (vcpu->arch.has_
pointer.
(I feel we should be taking kvm->mmu_lock in kvm_vcpu_init_nested() )
> + vcpu->arch.hw_mmu = get_s2_mmu_nested(vcpu);
> + spin_unlock(&vcpu->kvm->mmu_lock);
> + }
> +}
> +
> +void kvm_vcpu_put_hw_mmu(struct kvm_vcpu *vcpu)
> +{
> + if (vcpu->arch.hw_mmu != &vcpu->kvm->arch.mmu) {
> + atomic_dec(&vcpu->arch.hw_mmu->refcnt);
> + vcpu->arch.hw_mmu = NULL;
> + }
> +}
>
> /*
> * Inject wfx to the virtual EL2 if this is not from the virtual EL2 and
> @@ -37,3 +191,21 @@ int handle_wfx_nested(struct kvm_vcpu *vcpu, bool is_wfe)
>
> return -EINVAL;
> }
> +
> +void kvm_arch_flush_shadow_all(struct kvm *kvm)
> +{
> + int i;
> +
> + for (i = 0; i < kvm->arch.nested_mmus_size; i++) {
> + struct kvm_s2_mmu *mmu = &kvm->arch.nested_mmus[i];
> +
> + WARN_ON(atomic_read(&mmu->refcnt));
> +
> + if (!atomic_read(&mmu->refcnt))
> + kvm_free_stage2_pgd(mmu);
> + }
> + kfree(kvm->arch.nested_mmus);
> + kvm->arch.nested_mmus = NULL;
> + kvm->arch.nested_mmus_size = 0;
Don't we also need to take the lock before modifying those? (Apparently
we're killing the VM, so there shouldn't be other users, but I just want
to make sure...)
Cheers,
--
Julien Thierry
On 06/21/2019 10:38 AM, Marc Zyngier wrote:
> From: Jintack Lim
>
> Forward ELR_EL1, SPSR_EL1 and VBAR_EL1 traps to the virtual EL2 if the
> virtual HCR_EL2.NV bit is set.
>
> This is for recursive nested virtualization.
>
> Signed-off-by: Jintack Lim
> Signed-off-by: Marc Zyngier
> ---
>
On 06/21/2019 10:38 AM, Marc Zyngier wrote:
> From: Jintack Lim
>
> Forward the EL1 virtual memory register traps to the virtual EL2 if they
> are not coming from the virtual EL2 and the virtual HCR_EL2.TVM or TRVM
> bit is set.
>
> This is for recursive nested virtualization.
>
>
On 06/21/2019 10:38 AM, Marc Zyngier wrote:
> From: Jintack Lim
>
> Forward traps due to HCR_EL2.NV bit to the virtual EL2 if they are not
> coming from the virtual EL2 and the virtual HCR_EL2.NV bit is set.
>
> In addition to EL2 register accesses, setting NV bit will also make EL12
>
On 06/21/2019 10:38 AM, Marc Zyngier wrote:
> From: Jintack Lim
>
> Forward exceptions due to WFI or WFE instructions to the virtual EL2 if
> they are not coming from the virtual EL2 and virtual HCR_EL2.TWX is set.
>
> Signed-off-by: Jintack Lim
> Signed-off-by: Marc Zyngier
> ---
>
+ /* Address Translation instructions */
> + else if (params->CRn == 0b0111 && params->CRm == 0b1000)
> + ret = emulate_at(vcpu, params);
> +
So, in theory the NV bit shouldn't trap other Op0 == 1 instructions.
Would it be worth adding a WARN() or BUG() in an "else" branch here,
just in case?
Thanks,
--
Julien Thierry
On 06/21/2019 10:38 AM, Marc Zyngier wrote:
> From: Andre Przywara
>
> Whenever we need to restore the guest's system registers to the CPU, we
> now need to take care of the EL2 system registers as well. Most of them
> are accessed via traps only, but some have an immediate effect and also
>
(reg) {
> + case ELR_EL2:
> + return read_sysreg_el1(SYS_ELR);
Hmmm, this change feels a bit out of place.
Also, patch 13 added ELR_EL2 and SP_EL2 to the switch cases for physical
sysreg accesses. Now ELR_EL2 is moved out of the main switch cases and
SP_EL2 is co
EL2);
> case DBGVCR32_EL2: return read_sysreg_s(SYS_DBGVCR32_EL2);
> + case SP_EL2:return read_sysreg(sp_el1);
> + case ELR_EL2: return read_sysreg_el1(SYS_ELR);
> }
>
> immediate_read:
> @@ -125,6 +258,34 @@ void vcpu_write_sys_reg(struct kvm_vcp
a bit weirdly. Why do we care about anything else
than !nested_virt_in_use() ?
If nested virt is not in use then obviously we return the error.
If nested virt is in use then why do we care about EL1? Or should this
test read as "highest_el_is_32bit" ?
Thanks,
--
Julien Thierry
t_in_use(const struct kvm_vcpu *vcpu)
> +{
> + return cpus_have_const_cap(ARM64_HAS_NESTED_VIRT) &&
> + test_bit(KVM_ARM_VCPU_NESTED_VIRT, vcpu->arch.features);
Nit: You could make it even cheaper for some systems by ad
o opt-in for the capability of
running guests of their own.
Is it likely to have a negative impact on the host
kernel? Or on guests that do not request use of nested virt?
If not I feel that this kernel parameter should be dropped.
Cheers,
--
Julien Thierry
Hi Andre,
(sorry for the delay in reply)
On 05/04/2019 16:31, Andre Przywara wrote:
> On Thu, 7 Mar 2019 08:36:09 +
> Julien Thierry wrote:
>
> Hi,
>
>> Linux has this convention that the lower 0x1000 bytes of the IO space
>> should not be used. (cf PCIBIOS_M
its(counter);
Shouldn't this depend on PMCR.LC as well? If PMCR.LC is clear we only
want the lower 32bits of the cycle counter.
Cheers,
--
Julien Thierry
On 12/06/2019 11:58, Julien Thierry wrote:
>
>
> On 12/06/2019 10:52, Marc Zyngier wrote:
>> Hi Julien,
>>
>> On Wed, 12 Jun 2019 09:16:21 +0100,
>> Julien Thierry wrote:
>>>
>>> Hi Marc,
>>>
>>> On 11/06/2019 18:03, Marc Z
On 12/06/2019 10:52, Marc Zyngier wrote:
> Hi Julien,
>
> On Wed, 12 Jun 2019 09:16:21 +0100,
> Julien Thierry wrote:
>>
>> Hi Marc,
>>
>> On 11/06/2019 18:03, Marc Zyngier wrote:
>>> Add the basic data structure that expresses an MSI to LPI
>
On 12/06/2019 09:16, Julien Thierry wrote:
> Hi Marc,
>
> On 11/06/2019 18:03, Marc Zyngier wrote:
[...]
>> +
>> +void vgic_lpi_translation_cache_init(struct kvm *kvm)
>> +{
>> +struct vgic_dist *dist = &kvm->arch.vgic;
>> +unsigned int sz;
&
>entry, >lpi_translation_cache);
Going through the series, it looks like this list is either empty
(before the cache init) or has a fixed number
(LPI_DEFAULT_PCPU_CACHE_SIZE * nr_cpus) of entries. And the list never
grows nor shrinks throughout the series, so it seems odd to be usin
ent.
>
I think this looks good now. Once the previous patch is fixed you can add:
Reviewed-by: Julien Thierry
Cheers,
Julien
> Suggested-by: Marc Zyngier
> Signed-off-by: Andrew Murray
> ---
> include/kvm/arm_pmu.h | 2 +
> virt/kvm/arm/pmu.c| 248 +
On 07/06/2019 09:51, Marc Zyngier wrote:
> On 07/06/2019 09:35, Julien Thierry wrote:
>> Hi Marc,
>>
>> On 06/06/2019 17:54, Marc Zyngier wrote:
>>> On a successful translation, preserve the parameters in the LPI
>>> translation cache. Each translation i
c_put_lpi_locked(kvm, cte->irq);
> +
> + vgic_get_irq_kref(irq);
If cte->irq == irq, can we avoid the ref putting and getting and just
move the list entry (and update cte)?
Cheers,
--
Julien Thierry
vm/arm/vgic/vgic.h
> index abeeffabc456..a58e1b263dca 100644
> --- a/virt/kvm/arm/vgic/vgic.h
> +++ b/virt/kvm/arm/vgic/vgic.h
> @@ -316,6 +316,9 @@ int vgic_copy_lpi_list(struct kvm *kvm, struct kvm_vcpu
> *vcpu, u32 **intid_ptr);
> int vgic_its_resolve_lpi(stru
y (and "only" 26
nops after it) for KVM_INDIRECT_VECTORS? Or does this not affect
performance that much to be of interest?
> stp x0, x1, [sp, #-16]!
> 662:
> b \target
> @@ -320,6 +321,7 @@ ENTRY(__bp_harden_h
ode
> + * that jumps over this.
> + */
> +#define KVM_VECTOR_PREAMBLE 4
Nit: I would use AARCH64_INSN_SIZE instead of 4 for the value if
possible. It makes it clear what the value of the vector preamble
represents (and if we add an instruction we just multiply).
Otherwise the patch seems a good i
Hi Sudeep,
On 05/23/2019 11:34 AM, Sudeep Holla wrote:
> Currently since we don't support profiling using SPE in the guests,
> we just save the PMSCR_EL1, flush the profiling buffers and disable
> sampling. However in order to support simultaneous sampling both in
> the host and guests, we need
ext __debug_restore_host_context(struct kvm_vcpu *vcpu)
In the current state of the sources, __debug_switch_to_host() seems to
save the guest debug state before restoring the host's:
__debug_save_state(vcpu, guest_dbg, guest_ctxt);
Since you're splitting the switch_to into save/restore operations, it
feels like this would fit better __debug_save_guest_context() (currently
empty) rather than __debug_restore_host_context().
Cheers,
--
Julien Thierry
if (pmc->perf_event->state != PERF_EVENT_STATE_ACTIVE)
> + if (perf_event->state != PERF_EVENT_STATE_ACTIVE)
You forgot to set perf_event.
> kvm_debug("fail to enable perf event\n");
> }
> }
> @@ -192,6 +319,18 @@ void kvm_pmu_disable_counter_mask(struct kvm_vcpu *vcpu,
> u64 val)
> continue;
>
> pmc = &pmu->pmc[i];
> +
> + /*
> + * For high counters of chained events we must recreate the
> + * perf event with the long (64bit) attribute unset.
> + */
> + if (kvm_pmu_pmc_is_chained(pmc) &&
> + kvm_pmu_pmc_is_high_counter(i)) {
> + kvm_pmu_create_perf_event(vcpu, i);
> + continue;
> + }
> +
> + pmc = kvm_pmu_get_canonical_pmc(pmc);
Same as the enable case, we know pmc is already canonical, no need to
call the function.
Thanks,
--
Julien Thierry
> + event = perf_event_create_kernel_counter(&attr, -1, current,
> kvm_pmu_perf_overflow, pmc);
> + }
> +
> if (IS_ERR(event)) {
> pr_err_once("kvm: pmu event creation failed %ld\n",
>
er to
> + * determine if the event is enabled/disabled.
> + */
> + if (kvm_pmu_event_is_chained(vcpu, select_idx))
> + select_idx &= ~1UL;
> +
With both this and the pmc initialization it feels like we're doing
double the work/open coding things.
You could delay initialization of pmc here, after adjusting the
select_idx, to:
pmc = &pmu->pmc[select_idx];
> reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
> ? PMCCFILTR_EL0 : PMEVTYPER0_EL0 + select_idx;
> data = __vcpu_sys_reg(vcpu, reg);
> @@ -418,12 +555,28 @@ static void kvm_pmu_create_perf_event(struct kvm_vcpu
> *vcpu, u64 select_idx)
> attr.config = (select_idx == ARMV8_PMU_CYCLE_IDX) ?
> ARMV8_PMUV3_PERFCTR_CPU_CYCLES : eventsel;
>
> - counter = kvm_pmu_get_counter_value(vcpu, select_idx);
> - /* The initial sample period (overflow count) of an event. */
> - attr.sample_period = (-counter) & pmc->bitmask;
> + counter = kvm_pmu_get_pair_counter_value(vcpu, pmc);
> +
> + if (kvm_pmu_event_is_chained(vcpu, pmc->idx)) {
Nit: At that point I feel like kvm_pmu_pmc_is_chained() is a simpler
operation. (If we update the evtype we call the create function again
after setting the pair bitmap anyway, right?)
Cheers,
--
Julien Thierry
Hi,
On 04/04/2019 14:45, Andre Przywara wrote:
> On Thu, 7 Mar 2019 08:36:06 +
> Julien Thierry wrote:
>
> Hi,
>
>> The dynamic ioport allocation with IOPORT_EMPTY is currently only used
>> by PCI devices. Other devices use fixed ports for which they request
&g
On 28/03/2019 16:48, Dave Martin wrote:
> On Thu, Mar 28, 2019 at 02:29:23PM +0000, Julien Thierry wrote:
>>
>>
>> On 28/03/2019 12:27, Dave Martin wrote:
>>> On Wed, Mar 27, 2019 at 03:21:02PM +, Julien Thierry wrote:
>>>>
>&g
On 28/03/2019 12:27, Dave Martin wrote:
> On Wed, Mar 27, 2019 at 03:21:02PM +0000, Julien Thierry wrote:
>>
>>
>> On 27/03/2019 10:33, Dave Martin wrote:
>>> On Wed, Mar 27, 2019 at 09:47:42AM +, Julien Thierry wrote:
>>>> Hi Dave,
>&g
On 27/03/2019 10:41, Dave Martin wrote:
> On Wed, Mar 27, 2019 at 10:07:17AM +0000, Julien Thierry wrote:
>> Hi Dave,
>>
>> On 19/03/2019 17:52, Dave Martin wrote:
>>> This patch adds a kvm_arm_init_arch_resources() hook to perform
>>> subarch-speci
On 27/03/2019 10:33, Dave Martin wrote:
> On Wed, Mar 27, 2019 at 09:47:42AM +0000, Julien Thierry wrote:
>> Hi Dave,
>>
>> On 19/03/2019 17:52, Dave Martin wrote:
>>> This patch includes the SVE register IDs in the list returned by
>>> KVM_GET_REG_LIS
eviewed-by: Julien Thierry
Cheers,
Julien
> ---
>
> Changes since v5:
>
> * [Julien Thierry] Strip out has_vhe() sanity-check, which wasn't in
>the most logical place, and anyway doesn't really belong in this
>patch.
>
>Moved to KVM: arm64/sve: Allow u
ctor
> lengths available to an existing vcpu across reset.
>
> Signed-off-by: Dave Martin
>
Reviewed-by: Julien Thierry
Cheers,
Julien
> ---
>
> Changes since v5:
>
> * Refactored to make the code flow clearer and clarify responsibility
>for the various
sequent patches will allow SVE to be turned on for guest vcpus,
> making it visible.
>
> Signed-off-by: Dave Martin
>
Reviewed-by: Julien Thierry
Cheers,
Julien
> ---
>
> Changes since v5:
>
> * [Julien Thierry] Delete overzealous BUILD_BUG_ON() checks.
>I
> +#define kvm_arm_vcpu_finalize(vcpu, what) (-EINVAL)
> +#define kvm_arm_vcpu_is_finalized(vcpu) true
I had a bit of hesitation about having a per-feature ioctl call, but in
the end this seems simple enough to keep existing guests (not doing the
ioctl call) working and checking that the nec
do for arch
initialization?
In the same function I see a call to init_common_resources(), so I
would've pictured kvm_arm_init_arch_resources() being called close to it
(either right before or right after).
Otherwise:
Reviewed-by: Julien Thierry
Cheers,
--
Julien Thierry
these aliased
> registers.
>
> Signed-off-by: Dave Martin
>
Maybe it's because I already had reviewed the previous iteration, but
this time things do seem a bit clearer.
Reviewed-by: Julien Thierry
Thanks,
Julien
> ---
>
> Changes since v5:
>
> * [Julien Thierry]
tforward to add SVE register access
> support.
>
> Since SVE is an opt-in feature for userspace, this will not affect
> existing users.
>
> Signed-off-by: Dave Martin
>
Reviewed-by: Julien Thierry
Cheers,
Julien
> ---
>
> (Julien Thierry's Reviewed-by dropped
change.
>
> Signed-off-by: Dave Martin
>
Reviewed-by: Julien Thierry
Cheers,
Julien
> ---
>
> Changes since v5:
>
> * New patch.
>
>This reimplements part of the separately-posted patch "KVM: arm64:
>Factor out KVM_GET_REG_LIST core register enumera
al and Kconfig prerequisite of SVE.
>
> Signed-off-by: Dave Martin
>
Reviewed-by: Julien Thierry
> ---
>
> Changes since v5:
>
> * [Julien Thierry, Julien Grall] Commit message typo fixes
>
> * [Mark Rutland] Rename trap_class to hsr_ec, for consistency
at a non-SVE-enabled guest looks the same
> to userspace, irrespective of whether the kernel KVM implementation
> supports SVE.
>
> Signed-off-by: Dave Martin
Reviewed-by: Julien Thierry
Cheers,
Julien
>
> ---
>
> Changes since v5:
>
> * Port to th
When using an NMI for the PMU interrupt, taking any lock might cause a
deadlock. The current PMU overflow handler in KVM takes locks when
trying to wake up a vcpu.
When overflow handler is called by an NMI, defer the vcpu waking in an
irq_work queue.
Signed-off-by: Julien Thierry
Cc
On 21/03/2019 06:08, Amit Daniel Kachhap wrote:
> Hi Julien,
>
> On 3/20/19 5:43 PM, Julien Thierry wrote:
>> Hi Amit,
>>
>> On 19/03/2019 08:30, Amit Daniel Kachhap wrote:
>>> From: Mark Rutland
>>>
>>> When pointer authentication is supp
On 20/03/2019 15:04, Kristina Martsenko wrote:
> On 20/03/2019 13:37, Julien Thierry wrote:
>> Hi Amit,
>>
>> On 19/03/2019 08:30, Amit Daniel Kachhap wrote:
>>> This adds sections for KVM API extension for pointer authentication.
>>> A brief descriptio
flags are
required to enable ptrauth in a guest. However it raises the question,
if we don't plan to support the features individually (because we
can't), should we really expose two feature flags? It seems odd to
introduce two flags that only do something if used together...
Cheers,
--
Julien Thierry
authentication key signing error in some situations.
>
> Signed-off-by: Mark Rutland
> [Only VHE, key switch in full assembly, vcpu_has_ptrauth checks
> , save host key in ptrauth exception trap]
> Signed-off-by: Amit Daniel Kachhap
> Reviewed-by: Julien Thierry
> Cc: Marc Zyngie
kvm_mmu.h and should be moved to kvm_asm.h as well. I'd suggest
rephrasing it with something along these lines:
"Also, hyp_symbol_addr's corresponding counterpart, kvm_ksym_ref, is
already in kvm_asm.h, making it more sensible to move hyp_symbol_addr to
the same file."
Otherwise:
Reviewed
restore_host(struct kvm_vcpu *vcpu)
> +{
> + struct kvm_cpu_context *host_ctxt;
> + struct kvm_host_data *host;
> + u32 events_guest, events_host;
> +
> + if (!has_vhe())
> + return;
> +
> + host_ctxt = vcpu->arch.host_cpu_context;
> + host = container_of(host_ctxt, struct kvm_host_data, host_ctxt);
> + events_guest = host->pmu_events.events_guest;
> + events_host = host->pmu_events.events_host;
> +
> + kvm_vcpu_pmu_enable_el0(events_host);
> + kvm_vcpu_pmu_disable_el0(events_guest);
Same question as above, after vcpu_put, it seems we've disabled at EL0
host events that are common to the guest and the host.
Thanks,
--
Julien Thierry
ven the exclude_{host,guest} attributes, determine if we are going
> + * to need to switch counters at guest entry/exit.
> + */
> +static bool kvm_pmu_switch_needed(struct perf_event_attr *attr)
> +{
> + /* Only switch if attributes are different */
> + return (attr->exclude_h
Provide a simple reference-counting API, with a structure and helper
functions, inspired by the Linux kref.
Signed-off-by: Julien Thierry
---
include/kvm/ref_cnt.h | 53 +++
1 file changed, 53 insertions(+)
create mode 100644 include/kvm
-off-by: Julien Thierry
---
mmio.c | 26 ++
1 file changed, 22 insertions(+), 4 deletions(-)
diff --git a/mmio.c b/mmio.c
index 61e1d47..03e1a76 100644
--- a/mmio.c
+++ b/mmio.c
@@ -2,6 +2,7 @@
#include "kvm/kvm-cpu.h"
#include "kvm/rbtree-interval.h&
The vesa framebuffer is only used by architectures that explicitly
require it (i.e. x86). Compile it out for architectures not using it, as
its current implementation might not work for them.
Signed-off-by: Julien Thierry
---
Makefile | 3 ++-
1 file changed, 2 insertions(+), 1 deletion
they are
allowed to use.
Signed-off-by: Julien Thierry
---
hw/pci-shmem.c | 3 ++-
hw/vesa.c| 4 ++--
include/kvm/ioport.h | 3 ---
include/kvm/pci.h| 2 ++
ioport.c | 18 --
pci.c| 8
vfio/core.c | 6 --
virtio
Local Bus specification, Implementation Notes.
Signed-off-by: Sami Mujawar
Signed-off-by: Julien Thierry
---
pci.c | 51 ---
1 file changed, 48 insertions(+), 3 deletions(-)
diff --git a/pci.c b/pci.c
index 689869c..9edefa5 100644
--- a/pci.c
+
Linux has this convention that the lower 0x1000 bytes of the IO space
should not be used. (cf PCIBIOS_MIN_IO).
Just allocate those bytes up front to prevent future allocations from
assigning them to devices.
Signed-off-by: Julien Thierry
---
arm/pci.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/arm
command. Support this by mapping/unmapping
regions when the corresponding response gets enabled/disabled.
Signed-off-by: Julien Thierry
---
vfio/core.c | 8 +++---
vfio/pci.c | 82 ++---
2 files changed, 82 insertions(+), 8 deletions
Current PCI IO region that is exposed through the DT contains ports that
are reserved by non-PCI devices.
Use the proper PCI IO start so that the region exposed through DT can
actually be used to reassign device BARs.
Signed-off-by: Julien Thierry
---
arm/include/arm-common/pci.h | 1 +
arm
Fix this by having PCI devices use 256-byte ports for IO BARs.
Signed-off-by: Julien Thierry
---
hw/pci-shmem.c | 4 ++--
hw/vesa.c| 4 ++--
include/kvm/ioport.h | 1 -
pci.c| 2 +-
virtio/pci.c | 14 +++---
5 files changed, 12 inserti
Pausing all vcpus when reconfiguring something at run time is a big
overhead. Use an rwlock so that vcpus not accessing the resources being
reconfigured can continue running.
Signed-off-by: Julien Thierry
---
include/kvm/brlock.h | 11 ---
include/kvm/kvm.h| 2 --
2 files changed, 13
ent of PCI IO BARs
- Patches 9 to 12 make it possible to remap ioport and mmio regions
from vcpu threads, without pausing the entire VM
- Patches 13 to 16 add support for writing to BARs
Cheers,
Julien
--
Julien Thierry (15):
Makefile: Only compile vesa for archs that need it
b
.
Signed-off-by: Julien Thierry
---
include/kvm/virtio-pci.h | 1 +
virtio/pci.c | 153 +++
2 files changed, 144 insertions(+), 10 deletions(-)
diff --git a/include/kvm/virtio-pci.h b/include/kvm/virtio-pci.h
index b70cadd..37ffe02 100644
The kvm argument is not passed to br_read_lock/unlock. This works for
the barrier implementation because the argument is not used, but it
breaks if another lock implementation is used.
Signed-off-by: Julien Thierry
---
ioport.c | 4 ++--
mmio.c | 4 ++--
2 files changed, 4 insertions(+), 4
PCI devices support BAR reassignment. Get rid of the no longer needed
linux property.
Signed-off-by: Julien Thierry
---
arm/fdt.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/arm/fdt.c b/arm/fdt.c
index 980015b..219248e 100644
--- a/arm/fdt.c
+++ b/arm/fdt.c
@@ -140,7 +140,6 @@ static int
is being passed to all ioport callbacks, wrap it in
another structure that will get ref counted to avoid modifying all ioport
devices.
Signed-off-by: Julien Thierry
---
include/kvm/ioport.h | 1 -
ioport.c | 68 ++--
2 files changed, 45
receive a virtio_device.
Signed-off-by: Julien Thierry
---
virtio/pci.c | 69
1 file changed, 46 insertions(+), 23 deletions(-)
diff --git a/virtio/pci.c b/virtio/pci.c
index 5a6c0d0..32f9824 100644
--- a/virtio/pci.c
+++ b/virtio/pci.c
On 26/02/2019 12:13, Dave Martin wrote:
> On Thu, Feb 21, 2019 at 05:48:59PM +0000, Julien Thierry wrote:
>> Hi Dave,
>>
>> On 18/02/2019 19:52, Dave Martin wrote:
>>> This patch adds a new pseudo-register KVM_REG_ARM64_SVE_VLS to
>>> allow userspace to
On 26/02/2019 12:13, Dave Martin wrote:
> On Thu, Feb 21, 2019 at 03:23:37PM +0000, Julien Thierry wrote:
>> Hi Dave,
>>
>> On 18/02/2019 19:52, Dave Martin wrote:
>>> This patch adds the following registers for access via the
>&
On 26/02/2019 12:06, Dave Martin wrote:
> On Wed, Feb 20, 2019 at 11:12:49AM +0000, Julien Thierry wrote:
>> Hi Dave,
>>
>> On 18/02/2019 19:52, Dave Martin wrote:
>>> Due to the way the effective SVE vector length is controlled and
>>> trapped at differe
vm_reset_sve(struct kvm_vcpu *vcpu)
> if (!system_supports_sve())
> return -EINVAL;
>
> + /* Verify that KVM startup enforced this when SVE was detected: */
> + if (WARN_ON(!has_vhe()))
> + return -EINVAL;
I'm wondering, wouldn't it make more sense to ch
*/
> + vcpu->arch.flags |= KVM_ARM64_GUEST_HAS_SVE;
> +
> + return 0;
> +}
> +
> int kvm_arm_vcpu_finalize(struct kvm_vcpu *vcpu)
> {
> if (likely(kvm_arm_vcpu_finalized(vcpu)))
> return 0;
>
> + if (vcpu_has_sve(vcpu)) {
>
> Cc: Marc Zyngier
> Cc: James Morse
> Cc: Julien Thierry
> Cc: Suzuki K Pouloze
> Signed-off-by: Shaokun Zhang
> ---
> virt/kvm/arm/arch_timer.c | 2 --
> 1 file changed, 2 deletions(-)
>
> diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
>
return -EFAULT;
> }
>
> for (n = 0; n < SVE_NUM_PREGS; n++) {
> - if (put_user(KVM_REG_ARM64_SVE_PREG(n, i), (*uind)++))
> + reg = KVM_REG_ARM64_SVE_PREG(n, i);
> +
from the list, since userspace is required to access the Z-
> registers instead to access their context. For the variably-sized
> SVE registers, the appropriate set of slice IDs are enumerated,
> depending on the maximum vector length for the vcpu.
>
> Signed-off-by: Dave
> + default: break; /* fall through */
> + }
>
> if (is_timer_reg(reg->id))
> return get_timer_reg(vcpu, reg);
> @@ -390,12 +504,12 @@ int kvm_arm_set_reg(struct kvm_vcpu *vcpu, const struct
> kvm_one_reg *reg)
> if (
ward to add SVE register access
> support.
>
> Since SVE is an opt-in feature for userspace, this will not affect
> existing users.
>
> Signed-off-by: Dave Martin
Reviewed-by: Julien Thierry
> ---
> arch/arm64/kvm/guest.c | 38 +++---
at a non-SVE-enabled guest looks the same
> to userspace, irrespective of whether the kernel KVM implementation
> supports SVE.
>
> Signed-off-by: Dave Martin
>
Reviewed-by: Julien Thierry
> ---
>
> Changes since v4:
>
> * Remove annoying linebreak in assignment.
>
an
> architectural and Kconfig prerequisite of SVE.
>
> Signed-off-by: Dave Martin
>
Otherwise:
Reviewed-by: Julien Thierry
> ---
>
> Changes since v4:
>
> * Remove if_sve() helper in favour of open-coded static key checks.
>
> * Explicitly merg
dividual system
> registers such as the CPU ID registers, in addition to completely
> hiding register where appropriate.
>
> Signed-off-by: Dave Martin
Reviewed-by: Julien Thierry
>
> ---
>
> Changes since v4:
>
> * Move from a boolean sysreg property that just s
static inline void reset_unknown(struct kvm_vcpu *vcpu,
> {
> BUG_ON(!r->reg);
> BUG_ON(r->reg >= NR_SYS_REGS);
> - __vcpu_sys_reg(vcpu, r->reg) = 0x1de7ec7edbadc0deULL;
> +
> + /* If non-zero, r->val specifies which register bits are RES0: *
s all VQs in the committed set.
> + * This function is called during the bring-up of late secondary CPUs only.
Oh I see, this is for late CPUs. So you can probably disregard my
comment on the warning in the previous patch.
If you respin this series, I feel it would be more useful to have th
VE_VQ_MAX)
> + return 0; /* no mismatches */
> +
> + /*
> + * Mismatches above sve_max_virtualisable_vl are fine, since
> + * no guest is allowed to configure ZCR_EL2.LEN to exceed this:
> + */
> + if (sve_vl_f
rforming
> the "load" operation, just like we do when the interrupt actually fires.
> If the timer has a pending virtual interrupt at this stage, then we
> can safely flag the physical interrupt as being active, which prevents
> spurious exits.
>
> Signed-off-by: Marc Zyngi