Re: [PATCH] KVM: arm64: Move virt/kvm/arm to arch/arm64

2020-05-13 Thread Will Deacon
On Wed, May 13, 2020 at 11:40:34AM +0100, Marc Zyngier wrote:
> Now that the 32bit KVM/arm host is a distant memory, let's move the
> whole of the KVM/arm64 code into the arm64 tree.
> 
> As they said in the song: Welcome Home (Sanitarium).
> 
> Signed-off-by: Marc Zyngier 

Acked-by: Will Deacon 

Will


RE: [PATCH v11 00/13] SMMUv3 Nested Stage Setup (IOMMU part)

2020-05-13 Thread Shameerali Kolothum Thodi
Hi Eric,

> -Original Message-
> From: Auger Eric [mailto:eric.au...@redhat.com]
> Sent: 13 May 2020 14:29
> To: Shameerali Kolothum Thodi ;
> Zhangfei Gao ; eric.auger@gmail.com;
> io...@lists.linux-foundation.org; linux-ker...@vger.kernel.org;
> k...@vger.kernel.org; kvmarm@lists.cs.columbia.edu; w...@kernel.org;
> j...@8bytes.org; m...@kernel.org; robin.mur...@arm.com
> Cc: jean-phili...@linaro.org; alex.william...@redhat.com;
> jacob.jun@linux.intel.com; yi.l@intel.com; peter.mayd...@linaro.org;
> t...@semihalf.com; bbhush...@marvell.com
> Subject: Re: [PATCH v11 00/13] SMMUv3 Nested Stage Setup (IOMMU part)
> 
[...]

> >>> Yes, that's normal; this series is not meant to support vSVM at this stage.
> >>>
> >>> I intend to add the missing pieces during the next weeks.
> >>
> >> Thanks for that. I have made an attempt to add vSVA based on
> >> your v10 + JPB's SVA patches. The host kernel and QEMU changes can
> >> be found here [1][2].
> >>
> >> This basically adds multiple-PASID support on top of your changes.
> >> I have done some basic sanity testing and we have had some initial success
> >> with the zip VF device on our D06 platform. Please note that the STALL event
> >> is not yet supported, though it works fine if we mlock() guest user memory.
> >
> > I have added STALL support to our vSVA prototype and it seems to be
> > working (on our hardware). I have updated the kernel and QEMU branches
> > with the same [1][2]. I should warn you, though, that this is prototype
> > code and I am pretty much re-using the VFIO_IOMMU_SET_PASID_TABLE
> > interface for almost everything. But I thought I'd share it, in case it
> > is useful somehow!
> 
> Thank you again for sharing the POC. I looked at the kernel and QEMU
> branches.
> 
> Here are some preliminary comments:
> - "arm-smmu-v3: Reset S2TTB while switching back from nested stage":  as
> you mentionned S2TTB reset now is featured in v11

Yes.

> - "arm-smmu-v3: Add support for multiple pasid in nested mode": I could
> easily integrate this into my series. Update the iommu api first and
> pass multiple CD info in a separate patch

Ok.
> - "arm-smmu-v3: Add support to Invalidate CD": CD invalidation should be
> cascaded to host through the PASID cache invalidation uapi (no pb you
> warned us for the POC you simply used VFIO_IOMMU_SET_PASID_TABLE). I
> think I should add this support in my original series although it does
> not seem to trigger any issue up to now.

Agreed. The cache invalidation uapi is a better interface for this. Also, I
don't think this matters for non-vSVA cases, as the guest kernel table/CD
(PASID 0) will never get invalidated.

> - "arm-smmu-v3: Remove duplication of fault propagation". I understand
> the transcode is done somewhere else with SVA but we still need to do it
> if a single CD is used, right? I will review the SVA code to better
> understand.

Hmm, not sure. I need to take another look to see whether we need special
handling for the single-CD case or not.

> - for the STALL response injection I would tend to use a new VFIO region
> for responses. At the moment there is a single VFIO region for reporting
> the fault.

Sure. That will be much cleaner and will probably improve the context switch
latency. Another thing I noted with STALL is that the pasid_valid flag needs
to be taken care of in the SVA kernel path.

"iommu: Remove pasid validity check for STALL model page response msg"
I'm not sure this is the proper way to handle it.
 
> On the QEMU side:
> - I am currently working on 3.2 range invalidation support, which is
> needed for DPDK/VFIO
> - While at it, I will look at how to incrementally introduce some of the
> features you need in this series.

Ok. 

Thanks for taking a look at the POC.

Cheers,
Shameer



Re: [PATCH 02/15] arm64: kvm: Formalize host-hyp hypcall ABI

2020-05-13 Thread David Brazdil
> > In fact David already has a nice patch that transforms the whole thing
> > in a jump table, which is much nicer. I'll let him share the details
> > :)
> 
> Ah! Looking forward to reviewing it then!

It's not actually that different. It still has the same header file; it just
uses the macros to generate a jump table rather than an array of function
pointers. The main advantage is that we can avoid a .hyp.text dependency on
physvirt_offset. Feel free to have a look, branch 'topic/el2-obj-wip' at:
https://android-kvm.googlesource.com/linux
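
Roughly, the shape is something like this (a loose sketch with invented names,
not the actual branch contents): one macro list is expanded once into hcall IDs
shared with the host and once into an ID-based dispatcher on the EL2 side, so
.hyp.text never has to follow host function pointers.

/*
 * Illustration only: MY_* names are placeholders, not kernel symbols.
 */
#define MY_HYP_CALLS			\
	MY_HYP_CALL(my_vcpu_run)	\
	MY_HYP_CALL(my_flush_vmid)	\
	MY_HYP_CALL(my_timer_set_cntvoff)

/* Expand once to define the hcall IDs shared between host and hyp. */
enum my_hcall_id {
#define MY_HYP_CALL(fn)	MY_ID_##fn,
	MY_HYP_CALLS
#undef MY_HYP_CALL
	MY_NR_HCALLS
};

/* EL2-side stubs standing in for the real hyp functions. */
static void my_vcpu_run(unsigned long a1, unsigned long a2) { }
static void my_flush_vmid(unsigned long a1, unsigned long a2) { }
static void my_timer_set_cntvoff(unsigned long a1, unsigned long a2) { }

/* Expand again to dispatch on the ID in x0; typically compiles to a jump table. */
static void my_handle_host_hcall(unsigned long id, unsigned long a1,
				 unsigned long a2)
{
	switch (id) {
#define MY_HYP_CALL(fn)	case MY_ID_##fn: fn(a1, a2); break;
	MY_HYP_CALLS
#undef MY_HYP_CALL
	default:
		break;
	}
}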

Perhaps this is not worth the trouble. We do hope to get to a point where the
ABI between .text and .hyp.text is formalized, but in my mind that ABI is
unlikely to be using this same hypcall path.

On the other hand, I've played with preserving the function-pointer interface
in the last couple of days, and later in this series we do end up having to
declare all of the hcall entry points (which now have two ELF symbols), so we
end up with a similar table regardless, just with no IDs assigned.

-David


Re: [PATCH 01/15] arm64: kvm: Unify users of HVC instruction

2020-05-13 Thread David Brazdil
On Thu, May 07, 2020 at 03:36:33PM +0100, Quentin Perret wrote:
> On Thursday 07 May 2020 at 15:01:07 (+0100), Marc Zyngier wrote:
> > >  /*
> > > - * u64 __kvm_call_hyp(void *hypfn, ...);
> > > + * u64 __kvm_call_hyp(unsigned long arg, ...);
> > >   *
> > >   * This is not really a variadic function in the classic C-way and care must
> > >   * be taken when calling this to ensure parameters are passed in registers
> > >   * only, since the stack will change between the caller and the callee.
> > > - *
> > > - * Call the function with the first argument containing a pointer to the
> > > - * function you wish to call in Hyp mode, and subsequent arguments will be
> > > - * passed as x0, x1, and x2 (a maximum of 3 arguments in addition to the
> > > - * function pointer can be passed).  The function being called must be mapped
> > > - * in Hyp mode (see init_hyp_mode in arch/arm/kvm/arm.c).  Return values are
> > > - * passed in x0.
> > > - *
> > > - * A function pointer with a value less than 0xfff has a special meaning,
> > > - * and is used to implement hyp stubs in the same way as in
> > > - * arch/arm64/kernel/hyp_stub.S.
> > 
> > I don't think any of this becomes obsolete with this patch (apart from
> > the reference to 32bit), and only changes with patch #2. Or am I
> > misunderstanding something?
> 
> Nope, I think you're right. To be fair, this patch has changed quite
> a bit since the first version (which did change that comment a little
> later IIRC), but David has done all the hard work on top so I'll let
> him answer that one.
They have a different meaning in the three different contexts:
1) hyp-stub HVC: hcall ID + 3 args
2) hyp-init HVC: 4 args (no hcall ID)
3) host HVC: function pointer + 3 args
The patch was meant to have all three use the same HVC routine, eventually
converging on host HVCs also using 'hcall ID + 3 args' in patch 02, but it is
a bit forced at this point. I can drop this patch from the series and we can
clean this up later if it still makes sense.

> 
> And David, feel free to take the authorship for this patch -- I barely
> recognize it (for the better), so it's more than fair if you get the
> credit :)


Re: [PATCH v11 00/13] SMMUv3 Nested Stage Setup (IOMMU part)

2020-05-13 Thread Auger Eric
Hi Shameer,

On 5/7/20 8:59 AM, Shameerali Kolothum Thodi wrote:
> Hi Eric,
> 
>> -Original Message-
>> From: Shameerali Kolothum Thodi
>> Sent: 30 April 2020 10:38
>> To: 'Auger Eric' ; Zhangfei Gao
>> ; eric.auger@gmail.com;
>> io...@lists.linux-foundation.org; linux-ker...@vger.kernel.org;
>> k...@vger.kernel.org; kvmarm@lists.cs.columbia.edu; w...@kernel.org;
>> j...@8bytes.org; m...@kernel.org; robin.mur...@arm.com
>> Cc: jean-phili...@linaro.org; alex.william...@redhat.com;
>> jacob.jun@linux.intel.com; yi.l@intel.com; peter.mayd...@linaro.org;
>> t...@semihalf.com; bbhush...@marvell.com
>> Subject: RE: [PATCH v11 00/13] SMMUv3 Nested Stage Setup (IOMMU part)
>>
>> Hi Eric,
>>
>>> -Original Message-
>>> From: Auger Eric [mailto:eric.au...@redhat.com]
>>> Sent: 16 April 2020 08:45
>>> To: Zhangfei Gao ; eric.auger@gmail.com;
>>> io...@lists.linux-foundation.org; linux-ker...@vger.kernel.org;
>>> k...@vger.kernel.org; kvmarm@lists.cs.columbia.edu; w...@kernel.org;
>>> j...@8bytes.org; m...@kernel.org; robin.mur...@arm.com
>>> Cc: jean-phili...@linaro.org; Shameerali Kolothum Thodi
>>> ; alex.william...@redhat.com;
>>> jacob.jun@linux.intel.com; yi.l@intel.com; peter.mayd...@linaro.org;
>>> t...@semihalf.com; bbhush...@marvell.com
>>> Subject: Re: [PATCH v11 00/13] SMMUv3 Nested Stage Setup (IOMMU part)
>>>
>>> Hi Zhangfei,
>>>
>>> On 4/16/20 6:25 AM, Zhangfei Gao wrote:


 On 2020/4/14 11:05 PM, Eric Auger wrote:
> This version fixes an issue observed by Shameer on an SMMU 3.2,
> when moving from a dual-stage config to a stage 1 only config.
> The two high 64-bit words of the STE now get reset. Otherwise, leaving the
> S2TTB set may cause a C_BAD_STE error.
>
> This series can be found at:
> https://github.com/eauger/linux/tree/v5.6-2stage-v11_10.1
> (including the VFIO part)
> The companion QEMU series can still be found at:
> https://github.com/eauger/qemu/tree/v4.2.0-2stage-rfcv6
>
> Users have expressed interest in that work and tested v9/v10:
> - https://patchwork.kernel.org/cover/11039995/#23012381
> - https://patchwork.kernel.org/cover/11039995/#23197235
>
> Background:
>
> This series brings the IOMMU part of HW nested paging support
> in the SMMUv3. The VFIO part is submitted separately.
>
> The IOMMU API is extended to support 2 new API functionalities:
> 1) pass the guest stage 1 configuration
> 2) pass stage 1 MSI bindings
>
> Then those capabilities get implemented in the SMMUv3 driver.
>
> The virtualizer passes information through the VFIO user API,
> which cascades it to the iommu subsystem. This allows the guest
> to own the stage 1 tables and context descriptors (the so-called PASID
> table) while the host owns the stage 2 tables and main configuration
> structures (STE).
>
>

 Thanks Eric

 Tested v11 on Hisilicon kunpeng920 board via hardware zip accelerator.
 1. no-sva works, where the guest app directly uses physical addresses via ioctl.
>>> Thank you for the testing. Glad it works for you.
 2. vSVA still does not work, same as v10.
>>> Yes, that's normal; this series is not meant to support vSVM at this stage.
>>>
>>> I intend to add the missing pieces during the next weeks.
>>
>> Thanks for that. I have made an attempt to add vSVA based on
>> your v10 + JPB's SVA patches. The host kernel and QEMU changes can
>> be found here [1][2].
>>
>> This basically adds multiple-PASID support on top of your changes.
>> I have done some basic sanity testing and we have had some initial success
>> with the zip VF device on our D06 platform. Please note that the STALL event
>> is not yet supported, though it works fine if we mlock() guest user memory.
> 
> I have added STALL support to our vSVA prototype and it seems to be
> working (on our hardware). I have updated the kernel and QEMU branches with
> the same [1][2]. I should warn you, though, that this is prototype code and
> I am pretty much re-using the VFIO_IOMMU_SET_PASID_TABLE interface for
> almost everything. But I thought I'd share it, in case it is useful somehow!

Thank you again for sharing the POC. I looked at the kernel and QEMU
branches.

Here are some preliminary comments:
- "arm-smmu-v3: Reset S2TTB while switching back from nested stage": as
you mentioned, the S2TTB reset is now featured in v11
- "arm-smmu-v3: Add support for multiple pasid in nested mode": I could
easily integrate this into my series. Update the iommu API first and
pass the multiple-CD info in a separate patch.
- "arm-smmu-v3: Add support to Invalidate CD": CD invalidation should be
cascaded to the host through the PASID cache invalidation uapi (no problem,
you warned us that for the POC you simply used VFIO_IOMMU_SET_PASID_TABLE).
I think I should add this support to my original series, although it does
not seem to trigger any issue so far.
- "arm-smmu-v3: Remove duplication of fault propagation". I understand the
transcoding is done somewhere else with SVA, but we still need to do it if a
single CD is used, right? I will review the SVA code to better understand.

Re: [PATCH 08/26] KVM: arm64: Use TTL hint in when invalidating stage-2 translations

2020-05-13 Thread Andrew Scull
On Tue, May 12, 2020 at 01:04:31PM +0100, James Morse wrote:
> Hi Andrew,
> 
> On 07/05/2020 16:13, Andrew Scull wrote:
> >> @@ -176,7 +177,7 @@ static void clear_stage2_pud_entry(struct kvm_s2_mmu *mmu, pud_t *pud, phys_addr
> >>pmd_t *pmd_table __maybe_unused = stage2_pmd_offset(kvm, pud, 0);
> >>VM_BUG_ON(stage2_pud_huge(kvm, *pud));
> >>stage2_pud_clear(kvm, pud);
> >> -  kvm_tlb_flush_vmid_ipa(mmu, addr);
> >> +  kvm_tlb_flush_vmid_ipa(mmu, addr, S2_NO_LEVEL_HINT);
> >>stage2_pmd_free(kvm, pmd_table);
> >>put_page(virt_to_page(pud));
> >>  }
> >> @@ -186,7 +187,7 @@ static void clear_stage2_pmd_entry(struct kvm_s2_mmu *mmu, pmd_t *pmd, phys_addr
> >>pte_t *pte_table = pte_offset_kernel(pmd, 0);
> >>VM_BUG_ON(pmd_thp_or_huge(*pmd));
> >>pmd_clear(pmd);
> >> -  kvm_tlb_flush_vmid_ipa(mmu, addr);
> >> +  kvm_tlb_flush_vmid_ipa(mmu, addr, S2_NO_LEVEL_HINT);
> >>free_page((unsigned long)pte_table);
> >>put_page(virt_to_page(pmd));
> >>  }
> > 
> > Going by the names, is it possible to give a better level hint for these
> > cases?
> 
> There is no leaf entry being invalidated here. After clearing the range, we
> found we'd emptied (and invalidated) a whole page of mappings:
> | if (stage2_pmd_table_empty(kvm, start_pmd))
> | clear_stage2_pud_entry(mmu, pud, start_addr);
> 
> Now we want to remove the link to the empty page so we can free it. We are
> changing the structure of the tables, not what gets mapped.
> 
> I think this is why we need the un-hinted behaviour, to invalidate "any level
> of the translation table walk required to translate the specified IPA".
> Otherwise the hardware can look for a leaf at the indicated level, find none,
> and do nothing.
> 
> 
> This is sufficiently horrible, it's possible I've got it completely wrong!
> (does it make sense?)

Ok. `addr` is an IPA; that IPA is now omitted from the map, so it doesn't
appear in any entry of the table, least of all a leaf entry. That makes
sense.

Is there a convention to distinguish IPA and PA, similar to the
distinction for VA, or does kvmarm just use phys_addr_t all round?

It seems like the TTL patches are fairly self-contained; would it be
easier to separate them out from these larger series?


[PATCH] KVM: arm64: Simplify __kvm_timer_set_cntvoff implementation

2020-05-13 Thread Marc Zyngier
Now that this function isn't constrained by the 32bit PCS,
let's simplify it by taking a single 64bit offset instead
of two 32bit parameters.

Signed-off-by: Marc Zyngier 
---
 arch/arm64/include/asm/kvm_asm.h |  2 +-
 virt/kvm/arm/arch_timer.c| 12 +---
 virt/kvm/arm/hyp/timer-sr.c  |  3 +--
 3 files changed, 3 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 7c7eeeaab9fa..59e314f38e43 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -64,7 +64,7 @@ extern void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa);
 extern void __kvm_tlb_flush_vmid(struct kvm *kvm);
 extern void __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu);
 
-extern void __kvm_timer_set_cntvoff(u32 cntvoff_low, u32 cntvoff_high);
+extern void __kvm_timer_set_cntvoff(u64 cntvoff);
 
 extern int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu);
 
diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
index 93bd59b46848..487eba9f87cd 100644
--- a/virt/kvm/arm/arch_timer.c
+++ b/virt/kvm/arm/arch_timer.c
@@ -451,17 +451,7 @@ static void timer_restore_state(struct arch_timer_context *ctx)
 
 static void set_cntvoff(u64 cntvoff)
 {
-   u32 low = lower_32_bits(cntvoff);
-   u32 high = upper_32_bits(cntvoff);
-
-   /*
-* Since kvm_call_hyp doesn't fully support the ARM PCS especially on
-* 32-bit systems, but rather passes register by register shifted one
-* place (we put the function address in r0/x0), we cannot simply pass
-* a 64-bit value as an argument, but have to split the value in two
-* 32-bit halves.
-*/
-   kvm_call_hyp(__kvm_timer_set_cntvoff, low, high);
+   kvm_call_hyp(__kvm_timer_set_cntvoff, cntvoff);
 }
 
 static inline void set_timer_irq_phys_active(struct arch_timer_context *ctx, bool active)
diff --git a/virt/kvm/arm/hyp/timer-sr.c b/virt/kvm/arm/hyp/timer-sr.c
index ff76e6845fe4..fb5c0be33223 100644
--- a/virt/kvm/arm/hyp/timer-sr.c
+++ b/virt/kvm/arm/hyp/timer-sr.c
@@ -10,9 +10,8 @@
 
 #include 
 
-void __hyp_text __kvm_timer_set_cntvoff(u32 cntvoff_low, u32 cntvoff_high)
+void __hyp_text __kvm_timer_set_cntvoff(u64 cntvoff)
 {
-   u64 cntvoff = (u64)cntvoff_high << 32 | cntvoff_low;
write_sysreg(cntvoff, cntvoff_el2);
 }
 
-- 
2.20.1



[PATCH] KVM: arm64: Move virt/kvm/arm to arch/arm64

2020-05-13 Thread Marc Zyngier
Now that the 32bit KVM/arm host is a distant memory, let's move the
whole of the KVM/arm64 code into the arm64 tree.

As they said in the song: Welcome Home (Sanitarium).

Signed-off-by: Marc Zyngier 
---
 MAINTAINERS   |   1 -
 arch/arm64/kvm/Makefile   |  44 ++--
 {virt/kvm/arm => arch/arm64/kvm}/aarch32.c|   0
 {virt/kvm/arm => arch/arm64/kvm}/arch_timer.c |   0
 {virt/kvm/arm => arch/arm64/kvm}/arm.c|   2 +-
 arch/arm64/kvm/handle_exit.c  |   2 +-
 arch/arm64/kvm/hyp/Makefile   |   9 +-
 .../kvm/arm => arch/arm64/kvm}/hyp/aarch32.c  |   0
 .../kvm/arm => arch/arm64/kvm}/hyp/timer-sr.c |   0
 .../arm => arch/arm64/kvm}/hyp/vgic-v3-sr.c   |   4 -
 {virt/kvm/arm => arch/arm64/kvm}/hypercalls.c |   0
 {virt/kvm/arm => arch/arm64/kvm}/mmio.c   |   0
 {virt/kvm/arm => arch/arm64/kvm}/mmu.c|   0
 {virt/kvm/arm => arch/arm64/kvm}/perf.c   |   0
 .../arm/pmu.c => arch/arm64/kvm/pmu-emul.c|   0
 {virt/kvm/arm => arch/arm64/kvm}/psci.c   |   0
 {virt/kvm/arm => arch/arm64/kvm}/pvtime.c |   0
 arch/arm64/kvm/trace.h| 216 +-
 .../arm/trace.h => arch/arm64/kvm/trace_arm.h |  11 +-
 arch/arm64/kvm/trace_handle_exit.h| 215 +
 arch/arm64/kvm/vgic-sys-reg-v3.c  |   2 +-
 {virt/kvm/arm => arch/arm64/kvm}/vgic/trace.h |   2 +-
 .../arm => arch/arm64/kvm}/vgic/vgic-debug.c  |   0
 .../arm => arch/arm64/kvm}/vgic/vgic-init.c   |   0
 .../arm => arch/arm64/kvm}/vgic/vgic-irqfd.c  |   0
 .../arm => arch/arm64/kvm}/vgic/vgic-its.c|   0
 .../arm64/kvm}/vgic/vgic-kvm-device.c |   0
 .../arm64/kvm}/vgic/vgic-mmio-v2.c|   0
 .../arm64/kvm}/vgic/vgic-mmio-v3.c|   0
 .../arm => arch/arm64/kvm}/vgic/vgic-mmio.c   |   0
 .../arm => arch/arm64/kvm}/vgic/vgic-mmio.h   |   0
 .../kvm/arm => arch/arm64/kvm}/vgic/vgic-v2.c |   0
 .../kvm/arm => arch/arm64/kvm}/vgic/vgic-v3.c |   2 -
 .../kvm/arm => arch/arm64/kvm}/vgic/vgic-v4.c |   0
 {virt/kvm/arm => arch/arm64/kvm}/vgic/vgic.c  |   0
 {virt/kvm/arm => arch/arm64/kvm}/vgic/vgic.h  |   0
 36 files changed, 253 insertions(+), 257 deletions(-)
 rename {virt/kvm/arm => arch/arm64/kvm}/aarch32.c (100%)
 rename {virt/kvm/arm => arch/arm64/kvm}/arch_timer.c (100%)
 rename {virt/kvm/arm => arch/arm64/kvm}/arm.c (99%)
 rename {virt/kvm/arm => arch/arm64/kvm}/hyp/aarch32.c (100%)
 rename {virt/kvm/arm => arch/arm64/kvm}/hyp/timer-sr.c (100%)
 rename {virt/kvm/arm => arch/arm64/kvm}/hyp/vgic-v3-sr.c (99%)
 rename {virt/kvm/arm => arch/arm64/kvm}/hypercalls.c (100%)
 rename {virt/kvm/arm => arch/arm64/kvm}/mmio.c (100%)
 rename {virt/kvm/arm => arch/arm64/kvm}/mmu.c (100%)
 rename {virt/kvm/arm => arch/arm64/kvm}/perf.c (100%)
 rename virt/kvm/arm/pmu.c => arch/arm64/kvm/pmu-emul.c (100%)
 rename {virt/kvm/arm => arch/arm64/kvm}/psci.c (100%)
 rename {virt/kvm/arm => arch/arm64/kvm}/pvtime.c (100%)
 rename virt/kvm/arm/trace.h => arch/arm64/kvm/trace_arm.h (97%)
 create mode 100644 arch/arm64/kvm/trace_handle_exit.h
 rename {virt/kvm/arm => arch/arm64/kvm}/vgic/trace.h (93%)
 rename {virt/kvm/arm => arch/arm64/kvm}/vgic/vgic-debug.c (100%)
 rename {virt/kvm/arm => arch/arm64/kvm}/vgic/vgic-init.c (100%)
 rename {virt/kvm/arm => arch/arm64/kvm}/vgic/vgic-irqfd.c (100%)
 rename {virt/kvm/arm => arch/arm64/kvm}/vgic/vgic-its.c (100%)
 rename {virt/kvm/arm => arch/arm64/kvm}/vgic/vgic-kvm-device.c (100%)
 rename {virt/kvm/arm => arch/arm64/kvm}/vgic/vgic-mmio-v2.c (100%)
 rename {virt/kvm/arm => arch/arm64/kvm}/vgic/vgic-mmio-v3.c (100%)
 rename {virt/kvm/arm => arch/arm64/kvm}/vgic/vgic-mmio.c (100%)
 rename {virt/kvm/arm => arch/arm64/kvm}/vgic/vgic-mmio.h (100%)
 rename {virt/kvm/arm => arch/arm64/kvm}/vgic/vgic-v2.c (100%)
 rename {virt/kvm/arm => arch/arm64/kvm}/vgic/vgic-v3.c (99%)
 rename {virt/kvm/arm => arch/arm64/kvm}/vgic/vgic-v4.c (100%)
 rename {virt/kvm/arm => arch/arm64/kvm}/vgic/vgic.c (100%)
 rename {virt/kvm/arm => arch/arm64/kvm}/vgic/vgic.h (100%)

diff --git a/MAINTAINERS b/MAINTAINERS
index 091ec22c1a23..6c5b928989ed 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -9295,7 +9295,6 @@ F:arch/arm64/include/asm/kvm*
 F: arch/arm64/include/uapi/asm/kvm*
 F: arch/arm64/kvm/
 F: include/kvm/arm_*
-F: virt/kvm/arm/
 
 KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)
 L: linux-m...@vger.kernel.org
diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
index 5ffbdc39e780..7a3768538343 100644
--- a/arch/arm64/kvm/Makefile
+++ b/arch/arm64/kvm/Makefile
@@ -3,37 +3,37 @@
 # Makefile for Kernel-based Virtual Machine module
 #
 
-ccflags-y += -I $(srctree)/$(src) -I $(srctree)/virt/kvm/arm/vgic
+ccflags-y += -I $(srctree)/$(src)
 
 KVM=../../../virt/kvm
 
 obj-$(CONFIG_KVM_ARM_HOST) += kvm.o
 obj-$(CONFIG_KVM_ARM_HOST) += hyp/
 
-kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o $(KVM)/eventfd.o $(KVM)/vfio.o

[PATCH] KVM: arm64: Use cpus_have_final_cap for has_vhe()

2020-05-13 Thread Marc Zyngier
By the time we start using the has_vhe() helper, we have long
discovered whether we are running VHE or not. It thus makes
sense to use cpus_have_final_cap() instead of cpus_have_const_cap(),
which leads to a small text size reduction.

Signed-off-by: Marc Zyngier 
---
 arch/arm64/include/asm/virt.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h
index 61fd26752adc..5051b388c654 100644
--- a/arch/arm64/include/asm/virt.h
+++ b/arch/arm64/include/asm/virt.h
@@ -85,7 +85,7 @@ static inline bool is_kernel_in_hyp_mode(void)
 
 static __always_inline bool has_vhe(void)
 {
-   if (cpus_have_const_cap(ARM64_HAS_VIRT_HOST_EXTN))
+   if (cpus_have_final_cap(ARM64_HAS_VIRT_HOST_EXTN))
return true;
 
return false;
-- 
2.20.1



[PATCH V2] arm64/cpufeature: Drop open encodings while extracting parange

2020-05-13 Thread Anshuman Khandual
Currently there are multiple open-coded instances of the parange feature width
mask when fetching its value. Even the width mask value (0x7) itself is not
accurate; it should be (0xf), per ID_AA64MMFR0_EL1.PARange[3:0] in the ARM ARM
(0487F.a). Replace them with cpuid_feature_extract_unsigned_field(), which
extracts a standard 4-bit feature field (i.e. a 0xf mask).
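
As a standalone sketch of the difference (the MY_* names below are
placeholders for the kernel's ID_AA64MMFR0_PARANGE_SHIFT and the
cpuid_feature_extract_unsigned_field() helper, not new code proposed here):

#include <stdint.h>

#define MY_ID_AA64MMFR0_PARANGE_SHIFT	0

/*
 * Equivalent of extracting an unsigned 4-bit ID register field: shift the
 * field down and mask with 0xf, not 0x7.
 */
static inline unsigned int my_extract_parange(uint64_t mmfr0)
{
	return (mmfr0 >> MY_ID_AA64MMFR0_PARANGE_SHIFT) & 0xf;
}

/*
 * With the open-coded "& 0x7", any PARange encoding with bit 3 set
 * (values 0x8-0xf, should the architecture define them) would be silently
 * truncated; the 4-bit extraction preserves it.
 */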

Cc: Catalin Marinas 
Cc: Will Deacon 
Cc: Marc Zyngier 
Cc: James Morse 
Cc: kvmarm@lists.cs.columbia.edu
Cc: linux-arm-ker...@lists.infradead.org
Cc: linux-ker...@vger.kernel.org

Signed-off-by: Anshuman Khandual 
---
Changes in V2:

- Used cpuid_feature_extract_unsigned_field() per Mark

Changes in V1: (https://patchwork.kernel.org/patch/11541913/)

 arch/arm64/kernel/cpufeature.c |  3 ++-
 arch/arm64/kvm/reset.c | 11 ---
 2 files changed, 10 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 30917fe7942a..958a96947c2c 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2201,7 +2201,8 @@ void verify_hyp_capabilities(void)
}
 
/* Verify IPA range */
-   parange = mmfr0 & 0x7;
+   parange = cpuid_feature_extract_unsigned_field(mmfr0,
+   ID_AA64MMFR0_PARANGE_SHIFT);
ipa_max = id_aa64mmfr0_parange_to_phys_shift(parange);
if (ipa_max < get_kvm_ipa_limit()) {
pr_crit("CPU%d: IPA range mismatch\n", smp_processor_id());
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index 841b492ff334..bd9f66a81e1e 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -348,8 +348,11 @@ u32 get_kvm_ipa_limit(void)
 void kvm_set_ipa_limit(void)
 {
unsigned int ipa_max, pa_max, va_max, parange;
+   u64 mmfr0;
 
-   parange = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1) & 0x7;
+   mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
+   parange = cpuid_feature_extract_unsigned_field(mmfr0,
+   ID_AA64MMFR0_PARANGE_SHIFT);
pa_max = id_aa64mmfr0_parange_to_phys_shift(parange);
 
/* Clamp the IPA limit to the PA size supported by the kernel */
@@ -395,7 +398,7 @@ void kvm_set_ipa_limit(void)
  */
 int kvm_arm_setup_stage2(struct kvm *kvm, unsigned long type)
 {
-   u64 vtcr = VTCR_EL2_FLAGS;
+   u64 vtcr = VTCR_EL2_FLAGS, mmfr0;
u32 parange, phys_shift;
u8 lvls;
 
@@ -411,7 +414,9 @@ int kvm_arm_setup_stage2(struct kvm *kvm, unsigned long type)
phys_shift = KVM_PHYS_SHIFT;
}
 
-   parange = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1) & 7;
+   mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
+   parange = cpuid_feature_extract_unsigned_field(mmfr0,
+   ID_AA64MMFR0_PARANGE_SHIFT);
if (parange > ID_AA64MMFR0_PARANGE_MAX)
parange = ID_AA64MMFR0_PARANGE_MAX;
vtcr |= parange << VTCR_EL2_PS_SHIFT;
-- 
2.20.1
