Re: [PATCH 02/12] firmware: arm_ffa: Move comment before the field it is documenting

2022-12-01 Thread Sudeep Holla
Hi Quentin,

On Wed, Nov 16, 2022 at 05:03:25PM +, Quentin Perret wrote:
> From: Will Deacon 
> 
> This is consistent with the other comments in the struct.
>
Not sure how that happened :). Anyway,

Reviewed-by: Sudeep Holla 

I have yet to look at the other patches, and I would like to have a setup
to test them as well, so I will review and test the rest later. The reason
for reviewing the first 2 patches, which move the code out of the driver,
is to check if they can be merged for v6.2 itself.

I may start pushing FF-A v1.1 changes for v6.3, and I am trying to avoid
conflicts or cross-tree dependencies. I know it is quite late for v6.2, but
these changes are trivial, so it would be good to get them in for v6.2 if possible.

Will, thoughts? If you agree, please take these via arm64 for v6.2. I don't
have any FF-A changes for v6.2 ATM, so there should be no conflicts.

--
Regards,
Sudeep


Re: [PATCH 01/12] firmware: arm_ffa: Move constants to header file

2022-12-01 Thread Sudeep Holla
On Wed, Nov 16, 2022 at 05:03:24PM +, Quentin Perret wrote:
> From: Will Deacon 
> 
> FF-A function IDs and error codes will be needed in the hypervisor too,
> so move them to the header file where they can be shared. Rename the
> version constants with an "FFA_" prefix so that they are less likely
> to clash with other code in the tree.
>

Reviewed-by: Sudeep Holla 

> Co-developed-by: Andrew Walbran 
> Signed-off-by: Andrew Walbran 
> Signed-off-by: Will Deacon 
> Signed-off-by: Quentin Perret 
> ---
>  drivers/firmware/arm_ffa/driver.c | 101 +++---
>  include/linux/arm_ffa.h   |  83 
>  2 files changed, 93 insertions(+), 91 deletions(-)
> 
> diff --git a/drivers/firmware/arm_ffa/driver.c 
> b/drivers/firmware/arm_ffa/driver.c
> index d5e86ef40b89..fa85c64d3ded 100644
> --- a/drivers/firmware/arm_ffa/driver.c
> +++ b/drivers/firmware/arm_ffa/driver.c
> @@ -36,81 +36,6 @@
>  #include "common.h"
>  
>  #define FFA_DRIVER_VERSION   FFA_VERSION_1_0
> -
> -#define FFA_SMC(calling_convention, func_num)
> \
> - ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, (calling_convention),   \
> -ARM_SMCCC_OWNER_STANDARD, (func_num))
> -
> -#define FFA_SMC_32(func_num) FFA_SMC(ARM_SMCCC_SMC_32, (func_num))
> -#define FFA_SMC_64(func_num) FFA_SMC(ARM_SMCCC_SMC_64, (func_num))
> -
> -#define FFA_ERRORFFA_SMC_32(0x60)
> -#define FFA_SUCCESS  FFA_SMC_32(0x61)
> -#define FFA_INTERRUPTFFA_SMC_32(0x62)
> -#define FFA_VERSION  FFA_SMC_32(0x63)
> -#define FFA_FEATURES FFA_SMC_32(0x64)
> -#define FFA_RX_RELEASE   FFA_SMC_32(0x65)
> -#define FFA_RXTX_MAP FFA_SMC_32(0x66)
> -#define FFA_FN64_RXTX_MAPFFA_SMC_64(0x66)
> -#define FFA_RXTX_UNMAP   FFA_SMC_32(0x67)
> -#define FFA_PARTITION_INFO_GET   FFA_SMC_32(0x68)
> -#define FFA_ID_GET   FFA_SMC_32(0x69)
> -#define FFA_MSG_POLL FFA_SMC_32(0x6A)
> -#define FFA_MSG_WAIT FFA_SMC_32(0x6B)
> -#define FFA_YIELDFFA_SMC_32(0x6C)
> -#define FFA_RUN  FFA_SMC_32(0x6D)
> -#define FFA_MSG_SEND FFA_SMC_32(0x6E)
> -#define FFA_MSG_SEND_DIRECT_REQ  FFA_SMC_32(0x6F)
> -#define FFA_FN64_MSG_SEND_DIRECT_REQ FFA_SMC_64(0x6F)
> -#define FFA_MSG_SEND_DIRECT_RESP FFA_SMC_32(0x70)
> -#define FFA_FN64_MSG_SEND_DIRECT_RESPFFA_SMC_64(0x70)
> -#define FFA_MEM_DONATE   FFA_SMC_32(0x71)
> -#define FFA_FN64_MEM_DONATE  FFA_SMC_64(0x71)
> -#define FFA_MEM_LEND FFA_SMC_32(0x72)
> -#define FFA_FN64_MEM_LENDFFA_SMC_64(0x72)
> -#define FFA_MEM_SHAREFFA_SMC_32(0x73)
> -#define FFA_FN64_MEM_SHARE   FFA_SMC_64(0x73)
> -#define FFA_MEM_RETRIEVE_REQ FFA_SMC_32(0x74)
> -#define FFA_FN64_MEM_RETRIEVE_REQFFA_SMC_64(0x74)
> -#define FFA_MEM_RETRIEVE_RESPFFA_SMC_32(0x75)
> -#define FFA_MEM_RELINQUISH   FFA_SMC_32(0x76)
> -#define FFA_MEM_RECLAIM  FFA_SMC_32(0x77)
> -#define FFA_MEM_OP_PAUSE FFA_SMC_32(0x78)
> -#define FFA_MEM_OP_RESUMEFFA_SMC_32(0x79)
> -#define FFA_MEM_FRAG_RX  FFA_SMC_32(0x7A)
> -#define FFA_MEM_FRAG_TX  FFA_SMC_32(0x7B)
> -#define FFA_NORMAL_WORLD_RESUME  FFA_SMC_32(0x7C)
> -
> -/*
> - * For some calls it is necessary to use SMC64 to pass or return 64-bit 
> values.
> - * For such calls FFA_FN_NATIVE(name) will choose the appropriate
> - * (native-width) function ID.
> - */
> -#ifdef CONFIG_64BIT
> -#define FFA_FN_NATIVE(name)  FFA_FN64_##name
> -#else
> -#define FFA_FN_NATIVE(name)  FFA_##name
> -#endif
> -
> -/* FFA error codes. */
> -#define FFA_RET_SUCCESS(0)
> -#define FFA_RET_NOT_SUPPORTED  (-1)
> -#define FFA_RET_INVALID_PARAMETERS (-2)
> -#define FFA_RET_NO_MEMORY  (-3)
> -#define FFA_RET_BUSY   (-4)
> -#define FFA_RET_INTERRUPTED(-5)
> -#define FFA_RET_DENIED (-6)
> -#define FFA_RET_RETRY  (-7)
> -#define FFA_RET_ABORTED(-8)
> -
> -#define MAJOR_VERSION_MASK   GENMASK(30, 16)
> -#define MINOR_VERSION_MASK   GENMASK(15, 0)
> -#define MAJOR_VERSION(x) ((u16)(FIELD_GET(MAJOR_VERSION_MASK, (x
> -#define MINOR_VERSION(x) ((u16)(FIELD_GET(MINOR_VERSION_MASK, (x
> -#define PACK_VERSION_INFO(major, minor)  \
> - (FIELD_PREP(MAJOR_VERSION_MASK, (major)) | \
> -  FIELD_PREP(MINOR_VERSION_MASK, (minor)))
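
For reference, the function IDs being moved follow the SMCCC fast-call
encoding, which a small standalone sketch can reproduce (illustrative
only; the kernel composes these with ARM_SMCCC_CALL_VAL):

    #include <stdio.h>
    #include <stdint.h>

    /*
     * SMCCC function ID layout, as composed by FFA_SMC() above:
     *   bit  31    - fast call
     *   bit  30    - SMC64 calling convention
     *   bits 29:24 - owning entity (4 == standard secure service)
     *   bits 15:0  - function number
     */
    static uint32_t ffa_smc(int is64, uint32_t func_num)
    {
        return (1u << 31) | ((uint32_t)is64 << 30) | (4u << 24) |
               (func_num & 0xffffu);
    }

    int main(void)
    {
        printf("FFA_VERSION       = 0x%08x\n", ffa_smc(0, 0x63)); /* 0x84000063 */
        printf("FFA_FN64_MEM_LEND = 0x%08x\n", ffa_smc(1, 0x72)); /* 0xc4000072 */
        return 0;
    }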

Re: [PATCH 1/2] ACPI/AEST: Initial AEST driver

2021-12-16 Thread Sudeep Holla
On Thu, Dec 16, 2021 at 05:05:15PM -0500, Tyler Baicar wrote:
> -Moved ACPI for ARM64 maintainers to "to:"
> 
> Hi Marc, Darren,
> 
> On 11/30/2021 11:41 AM, Darren Hart wrote:
> > On Tue, Nov 30, 2021 at 09:45:46AM +, Marc Zyngier wrote:
> > > Hi Darren,
> > > 
> > > On Mon, 29 Nov 2021 20:39:23 +,
> > > Darren Hart  wrote:
> > > > On Wed, Nov 24, 2021 at 06:09:14PM +, Marc Zyngier wrote:
> > > > > On Wed, 24 Nov 2021 17:07:07 +,
> > > > > > diff --git a/MAINTAINERS b/MAINTAINERS
> > > > > > index 5250298d2817..aa0483726606 100644
> > > > > > --- a/MAINTAINERS
> > > > > > +++ b/MAINTAINERS
> > > > > > @@ -382,6 +382,7 @@ ACPI FOR ARM64 (ACPI/arm64)
> > > > > >   M:Lorenzo Pieralisi 
> > > > > >   M:Hanjun Guo 
> > > > > >   M:Sudeep Holla 
> > > > > > +R: Tyler Baicar 
> > > > > >   L:linux-a...@vger.kernel.org
> > > > > >   L:linux-arm-ker...@lists.infradead.org (moderated for 
> > > > > > non-subscribers)
> > > > > >   S:Maintained
> > > > > Isn't this a bit premature? This isn't even mentioned in the commit
> > > > > message, only in passing in the cover letter.
> > > > > 
> > > > Hi Marc,
> > > > 
> > > > This was something I encouraged Tyler to add during internal review,
> > > > both in response to the checkpatch.pl warning about adding new drivers
> > > > as well as our interest in reviewing any future changes to the aest
> > > > driver. Since refactoring is common, this level made sense to me - but
> > > > would it be preferable to add a new entry for just the new driver Tyler
> > > > added?
> > > Adding someone as the co-maintainer/co-reviewer of a whole subsystem
> > > (ACPI/arm64 in this case) comes, IMO, with a number of pre-requisites:
> > > has the proposed co-{maintainer,reviewer} contributed and/or reviewed
> > > a significant number of patches to that subsystem and/or actively
> > > participated in the public discussions on the design and the
> > > maintenance of the subsystem, so that their reviewing is authoritative
> > > enough? I won't be judge of this, but it is definitely something to
> > > consider.
> > Hi Marc,
> > 
> > Agreed. I applied similar criteria when considering sub maintainers for
> > the platform/x86 subsystem while I maintained it.
> > 
> > > I don't think preemptively adding someone to the MAINTAINERS entry to
> > > indicate an interest in a whole subsystem is the right way to do it.
> > > One could argue that this is what a mailing list is for! ;-) On the
> > > other hand, an active participation to the review process is the
> > > perfect way to engage with fellow developers and to grow a profile. It
> > > is at this stage that adding oneself as an upstream reviewer makes a
> > > lot of sense.
> > Also generally agree. In this specific case, our interest was in the
> > driver itself, and we had to decide between the whole subsystem or
> > adding another F: entry in MAINTAINERS for the specific driver. Since
> > drivers/acpi/arm64 only has 3 .c files in it, adding another entry
> > seemed premature and overly granular. Certainly a subjective thing and
> > we have no objection to adding the extra line if that's preferred. This
> > should have been noted in the commit message.
> 
> Thank you for the feedback here, I will make sure to add this to the commit
> message and cover letter in the next version.

Hi Marc,

Thanks for responding and providing all the necessary details.

> 
> Hi Lorenzo, Hanjun, Sudeep,
> 
> As for adding myself as a reviewer under ACPI for ARM64 or adding another F:
> entry, do you have a preference or guidance on what I should do here?
>

I prefer to start with an entry specific to the $subject driver, for all
the reasons Marc has already stated. The approach in this patch may also
confuse and mislead others who want to maintain specific drivers like this
in the future. Further, it would make this list grow even though not
everyone on it will be interested in reviewing or maintaining the ARM64
ACPI subsystem, adding more confusion for developers.

Of course, if you are interested and get engaged in the review of ARM64
ACPI in the future, we can always revisit and update accordingly.

Hope this helps and provides the clarification you are looking for.

-- 
Regards,
Sudeep


Re: [PATCH v4 2/5] firmware: smccc: Introduce SMCCC TRNG framework

2020-12-11 Thread Sudeep Holla
On Fri, Dec 11, 2020 at 04:00:02PM +, Andre Przywara wrote:
> The ARM DEN0098 document describes an SMCCC based firmware service to
> deliver hardware generated random numbers. Its existence is advertised
> according to the SMCCC v1.1 specification.
> 
> Add a (dummy) call to probe functions implemented in each architecture
> (ARM and arm64), to determine the existence of this interface.
> For now this returns false, but it will be overridden by each
> architecture's support patch.
> 
> Signed-off-by: Andre Przywara 
> Reviewed-by: Linus Walleij 

Reviewed-by: Sudeep Holla 
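
For reference, the per-architecture stub this patch introduces is tiny;
a sketch (assuming the hook is named smccc_probe_trng(), as in the merged
support) looks like:

    #include <linux/init.h>
    #include <linux/types.h>

    /*
     * Dummy probe: reports whether the SMCCC TRNG service exists.
     * The arm/arm64 support patches later in the series replace this
     * with a real check of TRNG_VERSION over SMCCC v1.1.
     */
    bool __init smccc_probe_trng(void)
    {
        return false;
    }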

-- 
Regards,
Sudeep


Re: [PATCH v4 1/5] firmware: smccc: Add SMCCC TRNG function call IDs

2020-12-11 Thread Sudeep Holla
On Fri, Dec 11, 2020 at 04:00:01PM +, Andre Przywara wrote:
> From: Ard Biesheuvel 
> 
> The ARM architected TRNG firmware interface, described in ARM spec
> DEN0098, defines an ARM SMCCC based interface to a true random number
> generator, provided by firmware.
> 
> Add the definitions of the SMCCC functions as defined by the spec.
> 
> Signed-off-by: Ard Biesheuvel 
> Signed-off-by: Andre Przywara 
> Reviewed-by: Linus Walleij 

Reviewed-by: Sudeep Holla 

-- 
Regards,
Sudeep


Re: [PATCH v3 19/23] kvm: arm64: Intercept host's CPU_ON SMCs

2020-11-27 Thread Sudeep Holla
On Thu, Nov 26, 2020 at 03:54:17PM +, David Brazdil wrote:
> Add a handler of the CPU_ON PSCI call from host. When invoked, it looks
> up the logical CPU ID corresponding to the provided MPIDR and populates
> the state struct of the target CPU with the provided x0, pc. It then
> calls CPU_ON itself, with an entry point in hyp that initializes EL2
> state before returning ERET to the provided PC in EL1.
> 
> There is a simple atomic lock around the boot args struct. If it is
> already locked, CPU_ON will return PENDING_ON error code.
> 
> Signed-off-by: David Brazdil 
> ---
>  arch/arm64/kvm/hyp/nvhe/hyp-init.S   |  30 
>  arch/arm64/kvm/hyp/nvhe/psci-relay.c | 109 +++
>  2 files changed, 139 insertions(+)
> 
> diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-init.S 
> b/arch/arm64/kvm/hyp/nvhe/hyp-init.S
> index 98ce40e17b42..ea71f653af55 100644
> --- a/arch/arm64/kvm/hyp/nvhe/hyp-init.S
> +++ b/arch/arm64/kvm/hyp/nvhe/hyp-init.S
> @@ -9,6 +9,7 @@
>  
>  #include 
>  #include 
> +#include 
>  #include 
>  #include 
>  #include 
> @@ -161,6 +162,35 @@ alternative_else_nop_endif
>   ret
>  SYM_CODE_END(___kvm_hyp_init)
>  
> +SYM_CODE_START(__kvm_hyp_cpu_on_entry)
> + msr SPsel, #1   // We want to use SP_EL{1,2}
> +
> + /* Check that the core was booted in EL2. */
> + mrs x1, CurrentEL
> + cmp x1, #CurrentEL_EL2
> + b.eq2f
> +
> + /* The core booted in EL1. KVM cannot be initialized on it. */
> +1:   wfe
> + wfi
> + b   1b
> +
> + /* Initialize EL2 CPU state to sane values. */
> +2:   mov x29, x0
> + init_el2_state nvhe
> + mov x0, x29
> +
> + /* Enable MMU, set vectors and stack. */
> + bl  ___kvm_hyp_init
> +
> + /* Load address of the C handler. */
> + ldr x1, =__kvm_hyp_psci_cpu_entry
> + kimg_hyp_va x1, x2
> +
> + /* Leave idmap. */
> + br  x1
> +SYM_CODE_END(__kvm_hyp_cpu_on_entry)
> +
>  SYM_CODE_START(__kvm_handle_stub_hvc)
>   cmp x0, #HVC_SOFT_RESTART
>   b.ne1f
> diff --git a/arch/arm64/kvm/hyp/nvhe/psci-relay.c 
> b/arch/arm64/kvm/hyp/nvhe/psci-relay.c
> index 7aa87ab7f5ce..39e507672e6e 100644
> --- a/arch/arm64/kvm/hyp/nvhe/psci-relay.c
> +++ b/arch/arm64/kvm/hyp/nvhe/psci-relay.c
> @@ -9,12 +9,17 @@
>  #include 
>  #include 
>  #include 
> +#include 
>  #include 
>  #include 
>  #include 
>  
>  #include 
>  
> +extern char __kvm_hyp_cpu_on_entry[];
> +
> +void __noreturn __host_enter(struct kvm_cpu_context *host_ctxt);
> +
>  /* Config options set by the host. */
>  u32 __ro_after_init kvm_host_psci_version;
>  u32 __ro_after_init kvm_host_psci_function_id[PSCI_FN_MAX];
> @@ -22,6 +27,19 @@ s64 __ro_after_init hyp_physvirt_offset;
>  
>  #define __hyp_pa(x) ((phys_addr_t)((x)) + hyp_physvirt_offset)
>  
> +#define INVALID_CPU_ID   UINT_MAX
> +
> +#define CPU_UNLOCKED 0
> +#define CPU_LOCKED   1
> +
> +struct cpu_boot_args {
> + unsigned long pc;
> + unsigned long r0;
> +};
> +
> +static DEFINE_PER_CPU(atomic_t, cpu_on_lock) = ATOMIC_INIT(0);
> +static DEFINE_PER_CPU(struct cpu_boot_args, cpu_on_args);
> +
>  static u64 get_psci_func_id(struct kvm_cpu_context *host_ctxt)
>  {
>   DECLARE_REG(u64, func_id, host_ctxt, 0);
> @@ -78,10 +96,99 @@ static __noreturn unsigned long 
> psci_forward_noreturn(struct kvm_cpu_context *ho
>   hyp_panic(); /* unreachable */
>  }
>
> +static unsigned int find_cpu_id(u64 mpidr)
> +{
> + unsigned int i;
> +
> + /* Reject invalid MPIDRs */
> + if (mpidr & ~MPIDR_HWID_BITMASK)
> + return INVALID_CPU_ID;
> +
> + for (i = 0; i < NR_CPUS; i++) {

I may not have understood the flow correctly, so just asking: this is just
called for secondaries on boot, right? And the cpumasks are set up by then?
Just trying to see if we can use cpu_possible_mask instead of running through
all 256/1k/4k CPUs (of course, depending on the NR_CPUS config).
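
i.e., something like the below (a sketch only, assuming the hyp code can
see the host's logical map and that cpu_possible_mask is initialised
before any secondary boots):

    static unsigned int find_cpu_id(u64 mpidr)
    {
        unsigned int i;

        /* Reject invalid MPIDRs */
        if (mpidr & ~MPIDR_HWID_BITMASK)
            return INVALID_CPU_ID;

        /* Walk only the CPUs that can ever exist, not all of 0..NR_CPUS-1 */
        for_each_possible_cpu(i) {
            if (cpu_logical_map(i) == mpidr)
                return i;
        }

        return INVALID_CPU_ID;
    }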

--
Regards,
Sudeep


Re: [PATCH v3 16/23] kvm: arm64: Forward safe PSCI SMCs coming from host

2020-11-27 Thread Sudeep Holla
On Thu, Nov 26, 2020 at 03:54:14PM +, David Brazdil wrote:
> Forward the following PSCI SMCs issued by host to EL3 as they do not
> require the hypervisor's intervention. This assumes that EL3 correctly
> implements the PSCI specification.
> 
> Only function IDs implemented in Linux are included.
> 
> Where both 32-bit and 64-bit variants exist, it is assumed that the host
> will always use the 64-bit variant.
> 
>  * SMCs that only return information about the system
>* PSCI_VERSION - PSCI version implemented by EL3
>* PSCI_FEATURES   - optional features supported by EL3
>* AFFINITY_INFO   - power state of core/cluster
>* MIGRATE_INFO_TYPE   - whether Trusted OS can be migrated
>* MIGRATE_INFO_UP_CPU - resident core of Trusted OS
>  * operations which do not affect the hypervisor
>* MIGRATE - migrate Trusted OS to a different core
>* SET_SUSPEND_MODE - toggle OS-initiated mode
>  * system shutdown/reset
>* SYSTEM_OFF
>* SYSTEM_RESET
>* SYSTEM_RESET2
> 
> Signed-off-by: David Brazdil 
> ---
>  arch/arm64/kvm/hyp/nvhe/psci-relay.c | 43 +++-
>  1 file changed, 42 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/kvm/hyp/nvhe/psci-relay.c 
> b/arch/arm64/kvm/hyp/nvhe/psci-relay.c
> index e7091d89f0fc..7aa87ab7f5ce 100644
> --- a/arch/arm64/kvm/hyp/nvhe/psci-relay.c
> +++ b/arch/arm64/kvm/hyp/nvhe/psci-relay.c
> @@ -57,14 +57,51 @@ static bool is_psci_call(u64 func_id)
>   }
>  }
>  
> +static unsigned long psci_call(unsigned long fn, unsigned long arg0,
> +unsigned long arg1, unsigned long arg2)
> +{
> + struct arm_smccc_res res;
> +
> + arm_smccc_1_1_smc(fn, arg0, arg1, arg2, &res);
> + return res.a0;
> +}
> +
> +static unsigned long psci_forward(struct kvm_cpu_context *host_ctxt)
> +{
> + return psci_call(cpu_reg(host_ctxt, 0), cpu_reg(host_ctxt, 1),
> +  cpu_reg(host_ctxt, 2), cpu_reg(host_ctxt, 3));
> +}
> +
> +static __noreturn unsigned long psci_forward_noreturn(struct kvm_cpu_context 
> *host_ctxt)
> +{
> + psci_forward(host_ctxt);
> + hyp_panic(); /* unreachable */
> +}
> +
>  static unsigned long psci_0_1_handler(u64 func_id, struct kvm_cpu_context 
> *host_ctxt)
>  {
> - return PSCI_RET_NOT_SUPPORTED;
> + if (func_id == kvm_host_psci_function_id[PSCI_FN_CPU_OFF])
> + return psci_forward(host_ctxt);
> + else if (func_id == kvm_host_psci_function_id[PSCI_FN_MIGRATE])
> + return psci_forward(host_ctxt);

Looks weird, or am I not seeing something right? Same action for both,
right? Can't they be combined?
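
i.e., something like (sketch):

	if (func_id == kvm_host_psci_function_id[PSCI_FN_CPU_OFF] ||
	    func_id == kvm_host_psci_function_id[PSCI_FN_MIGRATE])
		return psci_forward(host_ctxt);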

-- 
Regards,
Sudeep


Re: [PATCH v3 06/23] kvm: arm64: Add kvm-arm.protected early kernel parameter

2020-11-27 Thread Sudeep Holla
On Thu, Nov 26, 2020 at 03:54:04PM +, David Brazdil wrote:
> Add an early parameter that allows users to opt into protected KVM mode
> when using the nVHE hypervisor. In this mode, guest state will be kept
> private from the host. This will primarily involve enabling stage-2
> address translation for the host, restricting DMA to host memory, and
> filtering host SMCs.
> 
> Capability ARM64_PROTECTED_KVM is set if the param is passed, CONFIG_KVM
> is enabled and the kernel was not booted with VHE.
> 
> Signed-off-by: David Brazdil 
> ---
>  .../admin-guide/kernel-parameters.txt |  5 
>  arch/arm64/include/asm/cpucaps.h  |  3 +-
>  arch/arm64/include/asm/virt.h |  8 +
>  arch/arm64/kernel/cpufeature.c| 29 +++
>  arch/arm64/kvm/arm.c  |  4 ++-
>  5 files changed, 47 insertions(+), 2 deletions(-)
> 
> diff --git a/Documentation/admin-guide/kernel-parameters.txt 
> b/Documentation/admin-guide/kernel-parameters.txt
> index 526d65d8573a..06c89975c29c 100644
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -2259,6 +2259,11 @@
>   for all guests.
>   Default is 1 (enabled) if in 64-bit or 32-bit PAE mode.
>  
> + kvm-arm.protected=
> + [KVM,ARM] Allow spawning protected guests whose state
> + is kept private from the host. Only valid for non-VHE.
> + Default is 0 (disabled).
> +

Sorry for being pedantic, but can we reword this to say valid for
!CONFIG_ARM64_VHE? I read this as valid only on non-VHE hardware. It may be
just me, but if you agree, please update it so that it doesn't give the
impression that it is not valid on VHE-enabled hardware.

I was trying to run this on hardware and to understand the details of how
to do that.

-- 
Regards,
Sudeep


Re: [RFC PATCH v12 03/11] psci: export smccc conduit get helper.

2020-05-26 Thread Sudeep Holla
On Mon, May 25, 2020 at 01:37:56AM +, Jianyong Wu wrote:
> Hi Sudeep,
> 
> > -Original Message-
> > From: Sudeep Holla 
> > Sent: Friday, May 22, 2020 9:12 PM
> > To: Jianyong Wu 
> > Cc: net...@vger.kernel.org; yangbo...@nxp.com; john.stu...@linaro.org;
> > t...@linutronix.de; pbonz...@redhat.com; sean.j.christopher...@intel.com;
> > m...@kernel.org; richardcoch...@gmail.com; Mark Rutland
> > ; w...@kernel.org; Suzuki Poulose
> > ; Steven Price ; Justin
> > He ; Wei Chen ;
> > k...@vger.kernel.org; Steve Capper ; linux-
> > ker...@vger.kernel.org; Kaly Xin ; nd ;
> > Sudeep Holla ; kvmarm@lists.cs.columbia.edu;
> > linux-arm-ker...@lists.infradead.org
> > Subject: Re: [RFC PATCH v12 03/11] psci: export smccc conduit get helper.
> > 
> > On Fri, May 22, 2020 at 04:37:16PM +0800, Jianyong Wu wrote:
> > > Export arm_smccc_1_1_get_conduit then modules can use smccc helper
> > > which adopts it.
> > >
> > > Acked-by: Mark Rutland 
> > > Signed-off-by: Jianyong Wu 
> > > ---
> > >  drivers/firmware/psci/psci.c | 1 +
> > >  1 file changed, 1 insertion(+)
> > >
> > > diff --git a/drivers/firmware/psci/psci.c
> > > b/drivers/firmware/psci/psci.c index 2937d44b5df4..fd3c88f21b6a 100644
> > > --- a/drivers/firmware/psci/psci.c
> > > +++ b/drivers/firmware/psci/psci.c
> > > @@ -64,6 +64,7 @@ enum arm_smccc_conduit
> > > arm_smccc_1_1_get_conduit(void)
> > >
> > >   return psci_ops.conduit;
> > >  }
> > > +EXPORT_SYMBOL(arm_smccc_1_1_get_conduit);
> > >
> > 
> > I have moved this into drivers/firmware/smccc/smccc.c [1] Please update
> > this accordingly.
> 
> Ok, I will remove this patch next version.

You may still need it; it's just that this patch won't apply as the function
has moved to a new file.

-- 
Regards,
Sudeep


Re: [RFC PATCH v12 03/11] psci: export smccc conduit get helper.

2020-05-22 Thread Sudeep Holla
On Fri, May 22, 2020 at 04:37:16PM +0800, Jianyong Wu wrote:
> Export arm_smccc_1_1_get_conduit then modules can use smccc helper which
> adopts it.
>
> Acked-by: Mark Rutland 
> Signed-off-by: Jianyong Wu 
> ---
>  drivers/firmware/psci/psci.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/drivers/firmware/psci/psci.c b/drivers/firmware/psci/psci.c
> index 2937d44b5df4..fd3c88f21b6a 100644
> --- a/drivers/firmware/psci/psci.c
> +++ b/drivers/firmware/psci/psci.c
> @@ -64,6 +64,7 @@ enum arm_smccc_conduit arm_smccc_1_1_get_conduit(void)
>
>   return psci_ops.conduit;
>  }
> +EXPORT_SYMBOL(arm_smccc_1_1_get_conduit);
>

I have moved this into drivers/firmware/smccc/smccc.c [1]
Please update this accordingly.

Also, this series has been floating on the list for a while now; it is time
to drop "RFC" unless anyone has a strong objection to the idea here.

--
Regards,
Sudeep

[1] https://git.kernel.org/arm64/c/f2ae97062a48
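
For reference, a modular consumer of the exported symbol would then look
something like the below (a sketch; arm_smccc_1_1_get_conduit() and the
SMCCC_CONDUIT_* values are in linux/arm-smccc.h, the function name here
is made up):

    #include <linux/arm-smccc.h>
    #include <linux/errno.h>

    /* Bail out unless an SMCCC conduit (SMC or HVC) is available. */
    static int ptp_kvm_check_conduit(void)
    {
        switch (arm_smccc_1_1_get_conduit()) {
        case SMCCC_CONDUIT_SMC:
        case SMCCC_CONDUIT_HVC:
            return 0;
        default:
            return -EOPNOTSUPP;
        }
    }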


Re: [PATCH v2 05/15] arm64: KVM: add access handler for SPE system registers

2019-05-24 Thread Sudeep Holla
On Fri, May 24, 2019 at 12:36:24PM +0100, Julien Thierry wrote:
> Hi Sudeep,
> 
> On 23/05/2019 11:34, Sudeep Holla wrote:
> > The SPE Profiling Buffer's owning EL is configurable, and when
> > MDCR_EL2.E2PB is configured to provide buffer ownership to EL1, the
> > control registers are trapped.
> > 
> > Add access handlers for the Statistical Profiling Extension (SPE)
> > Profiling Buffer control registers. This is needed to support profiling
> > using SPE in the guests.
> > 
> > Signed-off-by: Sudeep Holla 
> > ---
> >  arch/arm64/include/asm/kvm_host.h | 13 
> >  arch/arm64/kvm/sys_regs.c | 35 +++
> >  include/kvm/arm_spe.h | 15 +
> >  3 files changed, 63 insertions(+)
> > 
> > diff --git a/arch/arm64/include/asm/kvm_host.h 
> > b/arch/arm64/include/asm/kvm_host.h
> > index 611a4884fb6c..559aa6931291 100644
> > --- a/arch/arm64/include/asm/kvm_host.h
> > +++ b/arch/arm64/include/asm/kvm_host.h
> > @@ -147,6 +147,19 @@ enum vcpu_sysreg {
> > MDCCINT_EL1,/* Monitor Debug Comms Channel Interrupt Enable Reg */
> > DISR_EL1,   /* Deferred Interrupt Status Register */
> >  
> > +   /* Statistical Profiling Extension Registers */
> > +
> > +   PMSCR_EL1,
> > +   PMSICR_EL1,
> > +   PMSIRR_EL1,
> > +   PMSFCR_EL1,
> > +   PMSEVFR_EL1,
> > +   PMSLATFR_EL1,
> > +   PMSIDR_EL1,
> > +   PMBLIMITR_EL1,
> > +   PMBPTR_EL1,
> > +   PMBSR_EL1,
> > +
> > /* Performance Monitors Registers */
> > PMCR_EL0,   /* Control Register */
> > PMSELR_EL0, /* Event Counter Selection Register */
> > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > index 857b226bcdde..dbf5056828d3 100644
> > --- a/arch/arm64/kvm/sys_regs.c
> > +++ b/arch/arm64/kvm/sys_regs.c
> > @@ -646,6 +646,30 @@ static void reset_pmcr(struct kvm_vcpu *vcpu, const 
> > struct sys_reg_desc *r)
> > __vcpu_sys_reg(vcpu, PMCR_EL0) = val;
> >  }
> >  
> > +static bool access_pmsb_val(struct kvm_vcpu *vcpu, struct sys_reg_params 
> > *p,
> > +   const struct sys_reg_desc *r)
> > +{
> > +   if (p->is_write)
> > +   vcpu_write_sys_reg(vcpu, p->regval, r->reg);
> > +   else
> > +   p->regval = vcpu_read_sys_reg(vcpu, r->reg);
> > +
> > +   return true;
> > +}
> > +
> > +static void reset_pmsb_val(struct kvm_vcpu *vcpu, const struct 
> > sys_reg_desc *r)
> > +{
> > +   if (!kvm_arm_support_spe_v1()) {
> > +   __vcpu_sys_reg(vcpu, r->reg) = 0;
> > +   return;
> > +   }
> > +
> > +   if (r->reg == PMSIDR_EL1)
> 
> If only PMSIDR_EL1 has a non-zero reset value, it feels a bit weird to
> share the reset function for all these registers.
>

Ah, right. Initially I did have a couple of other registers which were not
needed, so I removed them without noticing that I could have just used
reset_val(0) for all except PMSIDR_EL1.

> I would suggest only having a reset_pmsidr() function, and just use
> reset_val() with sys_reg_desc->val set to 0 for all the others.
>

Thanks for pointing this out.
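
i.e., something like (a sketch; exposing the host's PMSIDR_EL1 value is an
assumption based on the truncated hunk above):

    static void reset_pmsidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
    {
        /* PMSIDR_EL1 is an ID register; mirror the host value if SPE is usable */
        __vcpu_sys_reg(vcpu, PMSIDR_EL1) =
            kvm_arm_support_spe_v1() ? read_sysreg_s(SYS_PMSIDR_EL1) : 0;
    }

All the other registers can then keep reset_val() with ->val set to 0.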

--
Regards,
Sudeep


Re: [PATCH v2 12/15] KVM: arm64: add a new vcpu device control group for SPEv1

2019-05-24 Thread Sudeep Holla
On Fri, May 24, 2019 at 11:37:51AM +0100, Marc Zyngier wrote:
> Hi Sudeep,
> 
> On 23/05/2019 11:34, Sudeep Holla wrote:
> > To configure the virtual SPEv1 overflow interrupt number, we use the
> > vcpu kvm_device ioctl, encapsulating the KVM_ARM_VCPU_SPE_V1_IRQ
> > attribute within the KVM_ARM_VCPU_SPE_V1_CTRL group.
> > 
> > After configuring the SPEv1, call the vcpu ioctl with attribute
> > KVM_ARM_VCPU_SPE_V1_INIT to initialize the SPEv1.
> > 
> > Signed-off-by: Sudeep Holla 
> > ---
> >  Documentation/virtual/kvm/devices/vcpu.txt |  28 
> >  arch/arm64/include/asm/kvm_host.h  |   2 +-
> >  arch/arm64/include/uapi/asm/kvm.h  |   4 +
> >  arch/arm64/kvm/Makefile|   1 +
> >  arch/arm64/kvm/guest.c |   9 ++
> >  arch/arm64/kvm/reset.c |   3 +
> >  include/kvm/arm_spe.h  |  35 +
> >  include/uapi/linux/kvm.h   |   1 +
> >  virt/kvm/arm/arm.c |   1 +
> >  virt/kvm/arm/spe.c | 163 +
> >  10 files changed, 246 insertions(+), 1 deletion(-)
> >  create mode 100644 virt/kvm/arm/spe.c
> > 
> > diff --git a/Documentation/virtual/kvm/devices/vcpu.txt 
> > b/Documentation/virtual/kvm/devices/vcpu.txt
> > index 2b5dab16c4f2..d1ece488aeee 100644
> > --- a/Documentation/virtual/kvm/devices/vcpu.txt
> > +++ b/Documentation/virtual/kvm/devices/vcpu.txt
> > @@ -60,3 +60,31 @@ time to use the number provided for a given timer, 
> > overwriting any previously
> >  configured values on other VCPUs.  Userspace should configure the interrupt
> >  numbers on at least one VCPU after creating all VCPUs and before running 
> > any
> >  VCPUs.
> > +
> > +3. GROUP: KVM_ARM_VCPU_SPE_V1_CTRL
> > +Architectures: ARM64
> > +
> > +1.1. ATTRIBUTE: KVM_ARM_VCPU_SPE_V1_IRQ
> > +Parameters: in kvm_device_attr.addr the address for SPE buffer overflow 
> > interrupt
> > +   is a pointer to an int
> > +Returns: -EBUSY: The SPE overflow interrupt is already set
> > + -ENXIO: The overflow interrupt not set when attempting to get it
> > + -ENODEV: SPEv1 not supported
> > + -EINVAL: Invalid SPE overflow interrupt number supplied or
> > +  trying to set the IRQ number without using an in-kernel
> > +  irqchip.
> > +
> > +A value describing the SPEv1 (Statistical Profiling Extension v1) overflow
> > +interrupt number for this vcpu. This interrupt should be a PPI and the
> > +interrupt type and number must be the same for each vcpu.
> > +
> > +1.2 ATTRIBUTE: KVM_ARM_VCPU_SPE_V1_INIT
> > +Parameters: no additional parameter in kvm_device_attr.addr
> > +Returns: -ENODEV: SPEv1 not supported or GIC not initialized
> > + -ENXIO: SPEv1 not properly configured or in-kernel irqchip not
> > + configured as required prior to calling this attribute
> > + -EBUSY: SPEv1 already initialized
> > +
> > +Request the initialization of the SPEv1.  If using the SPEv1 with an 
> > in-kernel
> > +virtual GIC implementation, this must be done after initializing the 
> > in-kernel
> > +irqchip.
> > diff --git a/arch/arm64/include/asm/kvm_host.h 
> > b/arch/arm64/include/asm/kvm_host.h
> > index 6921fdfd477b..fc4ead0774b3 100644
> > --- a/arch/arm64/include/asm/kvm_host.h
> > +++ b/arch/arm64/include/asm/kvm_host.h
> > @@ -50,7 +50,7 @@
> >  
> >  #define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS
> >  
> > -#define KVM_VCPU_MAX_FEATURES 7
> > +#define KVM_VCPU_MAX_FEATURES 8
> >  
> >  #define KVM_REQ_SLEEP \
> > KVM_ARCH_REQ_FLAGS(0, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
> > diff --git a/arch/arm64/include/uapi/asm/kvm.h 
> > b/arch/arm64/include/uapi/asm/kvm.h
> > index 7b7ac0f6cec9..4c9e168de896 100644
> > --- a/arch/arm64/include/uapi/asm/kvm.h
> > +++ b/arch/arm64/include/uapi/asm/kvm.h
> > @@ -106,6 +106,7 @@ struct kvm_regs {
> >  #define KVM_ARM_VCPU_SVE   4 /* enable SVE for this CPU */
> >  #define KVM_ARM_VCPU_PTRAUTH_ADDRESS   5 /* VCPU uses address 
> > authentication */
> >  #define KVM_ARM_VCPU_PTRAUTH_GENERIC   6 /* VCPU uses generic 
> > authentication */
> > +#define KVM_ARM_VCPU_SPE_V1	7 /* Support guest SPEv1 */
> >  
> >  struct kvm_vcpu_init {
> > __u32 target;
> > @@ -306,6 +307,9 @@ struct kvm_vcpu_events {
> >  #define KVM_ARM_VCPU_TIMER_CTRL	1
> >  #

[PATCH v2 06/15] arm64: KVM/VHE: enable the use PMSCR_EL12 on VHE systems

2019-05-23 Thread Sudeep Holla
Currently, we are just using PMSCR_EL1 in the host for non-VHE systems.
We already have the {read,write}_sysreg_el*() accessors for accessing
particular ELs' sysregs in the presence of VHE.

Let's just define PMSCR_EL12 and start making use of it here, which will
access the right register on both VHE and non-VHE systems. This change
is required to add SPE guest support on VHE systems.

Signed-off-by: Sudeep Holla 
---
 arch/arm64/include/asm/kvm_hyp.h | 1 +
 arch/arm64/kvm/hyp/debug-sr.c| 6 +++---
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index f61378b77c9f..782955db61dd 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -103,6 +103,7 @@
 #define afsr1_EL12  sys_reg(3, 5, 5, 1, 1)
 #define esr_EL12sys_reg(3, 5, 5, 2, 0)
 #define far_EL12sys_reg(3, 5, 6, 0, 0)
+#define SYS_PMSCR_EL12  sys_reg(3, 5, 9, 9, 0)
 #define mair_EL12   sys_reg(3, 5, 10, 2, 0)
 #define amair_EL12  sys_reg(3, 5, 10, 3, 0)
 #define vbar_EL12   sys_reg(3, 5, 12, 0, 0)
diff --git a/arch/arm64/kvm/hyp/debug-sr.c b/arch/arm64/kvm/hyp/debug-sr.c
index 50009766e5e5..fa51236ebcb3 100644
--- a/arch/arm64/kvm/hyp/debug-sr.c
+++ b/arch/arm64/kvm/hyp/debug-sr.c
@@ -89,8 +89,8 @@ static void __hyp_text __debug_save_spe_nvhe(u64 *pmscr_el1)
return;
 
/* Yes; save the control register and disable data generation */
-   *pmscr_el1 = read_sysreg_s(SYS_PMSCR_EL1);
-   write_sysreg_s(0, SYS_PMSCR_EL1);
+   *pmscr_el1 = read_sysreg_el1_s(SYS_PMSCR);
+   write_sysreg_el1_s(0, SYS_PMSCR);
isb();
 
/* Now drain all buffered data to memory */
@@ -107,7 +107,7 @@ static void __hyp_text __debug_restore_spe_nvhe(u64 
pmscr_el1)
isb();
 
/* Re-enable data generation */
-   write_sysreg_s(pmscr_el1, SYS_PMSCR_EL1);
+   write_sysreg_el1_s(pmscr_el1, SYS_PMSCR);
 }
 
 static void __hyp_text __debug_save_state(struct kvm_vcpu *vcpu,
-- 
2.17.1



[PATCH v2 01/15] KVM: arm64: add {read, write}_sysreg_elx_s versions for new registers

2019-05-23 Thread Sudeep Holla
KVM provides {read,write}_sysreg_el1() to access ${REG}_EL1 when we
really want to read/write the EL1 register without any VHE register
redirection.

SPE registers are not supported by many versions of GAS. For this reason
we mostly use the mrs_s macro, which takes the sys_reg() representation.

However, these SPE registers in their sys_reg() representation don't work
well with the existing {read,write}_sysreg_el1 macros. We need to add
{read,write}_sysreg_el1_s versions to cope with them.

Signed-off-by: Sudeep Holla 
---
 arch/arm64/include/asm/kvm_hyp.h | 19 +++
 1 file changed, 19 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 09fe8bd15f6e..f61378b77c9f 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -35,6 +35,15 @@
 : "=r" (reg)); \
reg;\
})
+#define read_sysreg_elx_s(r,nvh,vh)\
+   ({  \
+   u64 reg;\
+   asm volatile(ALTERNATIVE(__mrs_s("%0", r##nvh), \
+__mrs_s("%0", r##vh),  \
+ARM64_HAS_VIRT_HOST_EXTN)  \
+: "=r" (reg)); \
+   reg;\
+   })
 
 #define write_sysreg_elx(v,r,nvh,vh)   \
do {\
@@ -44,6 +53,14 @@
 ARM64_HAS_VIRT_HOST_EXTN)  \
 : : "rZ" (__val)); \
} while (0)
+#define write_sysreg_elx_s(v,r,nvh,vh) \
+   do {\
+   u64 __val = (u64)(v);   \
+   asm volatile(ALTERNATIVE(__msr_s(r##nvh, "%x0"),\
+__msr_s(r##vh, "%x0"), \
+ARM64_HAS_VIRT_HOST_EXTN)  \
+: : "rZ" (__val)); \
+   } while (0)
 
 /*
  * Unified accessors for registers that have a different encoding
@@ -72,7 +89,9 @@
 #define read_sysreg_el0(r) read_sysreg_elx(r, _EL0, _EL02)
 #define write_sysreg_el0(v,r)  write_sysreg_elx(v, r, _EL0, _EL02)
 #define read_sysreg_el1(r) read_sysreg_elx(r, _EL1, _EL12)
+#define read_sysreg_el1_s(r)   read_sysreg_elx_s(r, _EL1, _EL12)
 #define write_sysreg_el1(v,r)  write_sysreg_elx(v, r, _EL1, _EL12)
+#define write_sysreg_el1_s(v,r)write_sysreg_elx_s(v, r, _EL1, _EL12)
 
 /* The VHE specific system registers and their encoding */
 #define sctlr_EL12  sys_reg(3, 5, 1, 0, 0)
-- 
2.17.1
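
Usage then mirrors the existing accessors; patch 06/15 switches debug-sr.c
over to exactly this pattern:

	/* resolves to PMSCR_EL1 without VHE, PMSCR_EL12 with VHE */
	u64 pmscr = read_sysreg_el1_s(SYS_PMSCR);
	write_sysreg_el1_s(0, SYS_PMSCR);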



[PATCH v2 09/15] arm64: KVM: add support to save/restore SPE profiling buffer controls

2019-05-23 Thread Sudeep Holla
Currently, since we don't support profiling using SPE in the guests,
we just save PMSCR_EL1, flush the profiling buffers and disable
sampling. However, in order to support simultaneous sampling in both
the host and guests, we need to save and restore the complete SPE
profiling buffer controls' context.

Let's add support for that and keep it disabled for now. We can enable
it conditionally, only if guests are allowed to use SPE.

Signed-off-by: Sudeep Holla 
---
 arch/arm64/kvm/hyp/debug-sr.c | 44 ---
 1 file changed, 35 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/kvm/hyp/debug-sr.c b/arch/arm64/kvm/hyp/debug-sr.c
index a2714a5eb3e9..a4e6eaf5934f 100644
--- a/arch/arm64/kvm/hyp/debug-sr.c
+++ b/arch/arm64/kvm/hyp/debug-sr.c
@@ -66,7 +66,8 @@
default:write_debug(ptr[0], reg, 0);\
}
 
-static void __hyp_text __debug_save_spe_nvhe(struct kvm_cpu_context *ctxt)
+static void __hyp_text
+__debug_save_spe_nvhe(struct kvm_cpu_context *ctxt, bool full_ctxt)
 {
u64 reg;
 
@@ -83,22 +84,37 @@ static void __hyp_text __debug_save_spe_nvhe(struct 
kvm_cpu_context *ctxt)
if (reg & BIT(SYS_PMBIDR_EL1_P_SHIFT))
return;
 
-   /* No; is the host actually using the thing? */
-   reg = read_sysreg_s(SYS_PMBLIMITR_EL1);
-   if (!(reg & BIT(SYS_PMBLIMITR_EL1_E_SHIFT)))
+   /* Save the control register and disable data generation */
+   ctxt->sys_regs[PMSCR_EL1] = read_sysreg_el1_s(SYS_PMSCR);
+
+   if (!ctxt->sys_regs[PMSCR_EL1])
return;
 
-   /* Yes; save the control register and disable data generation */
-   ctxt->sys_regs[PMSCR_EL1] = read_sysreg_el1_s(SYS_PMSCR);
write_sysreg_el1_s(0, SYS_PMSCR);
isb();
 
/* Now drain all buffered data to memory */
psb_csync();
dsb(nsh);
+
+   if (!full_ctxt)
+   return;
+
+   ctxt->sys_regs[PMBLIMITR_EL1] = read_sysreg_s(SYS_PMBLIMITR_EL1);
+   write_sysreg_s(0, SYS_PMBLIMITR_EL1);
+   isb();
+
+   ctxt->sys_regs[PMSICR_EL1] = read_sysreg_s(SYS_PMSICR_EL1);
+   ctxt->sys_regs[PMSIRR_EL1] = read_sysreg_s(SYS_PMSIRR_EL1);
+   ctxt->sys_regs[PMSFCR_EL1] = read_sysreg_s(SYS_PMSFCR_EL1);
+   ctxt->sys_regs[PMSEVFR_EL1] = read_sysreg_s(SYS_PMSEVFR_EL1);
+   ctxt->sys_regs[PMSLATFR_EL1] = read_sysreg_s(SYS_PMSLATFR_EL1);
+   ctxt->sys_regs[PMBPTR_EL1] = read_sysreg_s(SYS_PMBPTR_EL1);
+   ctxt->sys_regs[PMBSR_EL1] = read_sysreg_s(SYS_PMBSR_EL1);
 }
 
-static void __hyp_text __debug_restore_spe_nvhe(struct kvm_cpu_context *ctxt)
+static void __hyp_text
+__debug_restore_spe_nvhe(struct kvm_cpu_context *ctxt, bool full_ctxt)
 {
if (!ctxt->sys_regs[PMSCR_EL1])
return;
@@ -107,6 +123,16 @@ static void __hyp_text __debug_restore_spe_nvhe(struct 
kvm_cpu_context *ctxt)
isb();
 
/* Re-enable data generation */
+   if (full_ctxt) {
+   write_sysreg_s(ctxt->sys_regs[PMBPTR_EL1], SYS_PMBPTR_EL1);
+   write_sysreg_s(ctxt->sys_regs[PMBLIMITR_EL1], 
SYS_PMBLIMITR_EL1);
+   write_sysreg_s(ctxt->sys_regs[PMSFCR_EL1], SYS_PMSFCR_EL1);
+   write_sysreg_s(ctxt->sys_regs[PMSEVFR_EL1], SYS_PMSEVFR_EL1);
+   write_sysreg_s(ctxt->sys_regs[PMSLATFR_EL1], SYS_PMSLATFR_EL1);
+   write_sysreg_s(ctxt->sys_regs[PMSIRR_EL1], SYS_PMSIRR_EL1);
+   write_sysreg_s(ctxt->sys_regs[PMSICR_EL1], SYS_PMSICR_EL1);
+   write_sysreg_s(ctxt->sys_regs[PMBSR_EL1], SYS_PMBSR_EL1);
+   }
write_sysreg_el1_s(ctxt->sys_regs[PMSCR_EL1], SYS_PMSCR);
 }
 
@@ -179,7 +205,7 @@ void __hyp_text __debug_restore_host_context(struct 
kvm_vcpu *vcpu)
guest_ctxt = &vcpu->arch.ctxt;
 
if (!has_vhe())
-   __debug_restore_spe_nvhe(host_ctxt);
+   __debug_restore_spe_nvhe(host_ctxt, false);
 
if (!(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY))
return;
@@ -203,7 +229,7 @@ void __hyp_text __debug_save_host_context(struct kvm_vcpu 
*vcpu)
 
host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
if (!has_vhe())
-   __debug_save_spe_nvhe(host_ctxt);
+   __debug_save_spe_nvhe(host_ctxt, false);
 }
 
 void __hyp_text __debug_save_guest_context(struct kvm_vcpu *vcpu)
-- 
2.17.1



[PATCH v2 07/15] arm64: KVM: split debug save restore across vm/traps activation

2019-05-23 Thread Sudeep Holla
If we enable the profiling buffer controls at EL1 to generate a trap
exception to EL2, it also changes the profiling buffer to use the EL1&0
stage 1 translation regime in the case of VHE. To support SPE in both the
guest and the host, we need to first stop profiling and flush the profiling
buffers before we activate/switch the vm or enable/disable the traps.

In preparation for that, let's split the debug save/restore functionality
into 4 steps:
1. debug_save_host_context - saves the host context
2. debug_restore_guest_context - restores the guest context
3. debug_save_guest_context - saves the guest context
4. debug_restore_host_context - restores the host context

Let's rename the existing __debug_switch_to_{host,guest} helpers to align
with the above, and just add placeholders for the new ones added here, as
we need them to support SPE in guests.

Signed-off-by: Sudeep Holla 
---
 arch/arm64/include/asm/kvm_hyp.h |  6 --
 arch/arm64/kvm/hyp/debug-sr.c| 25 -
 arch/arm64/kvm/hyp/switch.c  | 12 
 3 files changed, 28 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 782955db61dd..1c5ed80fcbda 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -164,8 +164,10 @@ void sysreg_restore_guest_state_vhe(struct kvm_cpu_context 
*ctxt);
 void __sysreg32_save_state(struct kvm_vcpu *vcpu);
 void __sysreg32_restore_state(struct kvm_vcpu *vcpu);
 
-void __debug_switch_to_guest(struct kvm_vcpu *vcpu);
-void __debug_switch_to_host(struct kvm_vcpu *vcpu);
+void __debug_save_host_context(struct kvm_vcpu *vcpu);
+void __debug_restore_guest_context(struct kvm_vcpu *vcpu);
+void __debug_save_guest_context(struct kvm_vcpu *vcpu);
+void __debug_restore_host_context(struct kvm_vcpu *vcpu);
 
 void __fpsimd_save_state(struct user_fpsimd_state *fp_regs);
 void __fpsimd_restore_state(struct user_fpsimd_state *fp_regs);
diff --git a/arch/arm64/kvm/hyp/debug-sr.c b/arch/arm64/kvm/hyp/debug-sr.c
index fa51236ebcb3..618884df1dc4 100644
--- a/arch/arm64/kvm/hyp/debug-sr.c
+++ b/arch/arm64/kvm/hyp/debug-sr.c
@@ -149,20 +149,13 @@ static void __hyp_text __debug_restore_state(struct 
kvm_vcpu *vcpu,
write_sysreg(ctxt->sys_regs[MDCCINT_EL1], mdccint_el1);
 }
 
-void __hyp_text __debug_switch_to_guest(struct kvm_vcpu *vcpu)
+void __hyp_text __debug_restore_guest_context(struct kvm_vcpu *vcpu)
 {
struct kvm_cpu_context *host_ctxt;
struct kvm_cpu_context *guest_ctxt;
struct kvm_guest_debug_arch *host_dbg;
struct kvm_guest_debug_arch *guest_dbg;
 
-   /*
-* Non-VHE: Disable and flush SPE data generation
-* VHE: The vcpu can run, but it can't hide.
-*/
-   if (!has_vhe())
-   __debug_save_spe_nvhe(&vcpu->arch.host_debug_state.pmscr_el1);
-
if (!(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY))
return;
 
@@ -175,7 +168,7 @@ void __hyp_text __debug_switch_to_guest(struct kvm_vcpu 
*vcpu)
__debug_restore_state(vcpu, guest_dbg, guest_ctxt);
 }
 
-void __hyp_text __debug_switch_to_host(struct kvm_vcpu *vcpu)
+void __hyp_text __debug_restore_host_context(struct kvm_vcpu *vcpu)
 {
struct kvm_cpu_context *host_ctxt;
struct kvm_cpu_context *guest_ctxt;
@@ -199,6 +192,20 @@ void __hyp_text __debug_switch_to_host(struct kvm_vcpu 
*vcpu)
vcpu->arch.flags &= ~KVM_ARM64_DEBUG_DIRTY;
 }
 
+void __hyp_text __debug_save_host_context(struct kvm_vcpu *vcpu)
+{
+   /*
+* Non-VHE: Disable and flush SPE data generation
+* VHE: The vcpu can run, but it can't hide.
+*/
+   if (!has_vhe())
+   __debug_save_spe_nvhe(&vcpu->arch.host_debug_state.pmscr_el1);
+}
+
+void __hyp_text __debug_save_guest_context(struct kvm_vcpu *vcpu)
+{
+}
+
 u32 __hyp_text __kvm_get_mdcr_el2(void)
 {
return read_sysreg(mdcr_el2);
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index 9b2461138ddc..844f0dd7a7f0 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -515,6 +515,7 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
guest_ctxt = &vcpu->arch.ctxt;
 
sysreg_save_host_state_vhe(host_ctxt);
+   __debug_save_host_context(vcpu);
 
/*
 * ARM erratum 1165522 requires us to configure both stage 1 and
@@ -531,7 +532,7 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
__activate_traps(vcpu);
 
sysreg_restore_guest_state_vhe(guest_ctxt);
-   __debug_switch_to_guest(vcpu);
+   __debug_restore_guest_context(vcpu);
 
__set_guest_arch_workaround_state(vcpu);
 
@@ -545,6 +546,7 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
__set_host_arch_workaround_state(vcpu);
 
sysreg_save_guest_state_vhe(guest_ctxt);
+   __debug_save_guest_context(vcpu);
 
__deactivate_traps(vcpu);
 
@@ -553,7 +555,7 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu

[PATCH v2 14/15][KVMTOOL] update_headers: Sync kvm UAPI headers with linux v5.2-rc1

2019-05-23 Thread Sudeep Holla
The local copies of the kvm user API headers are getting stale.

In preparation for some arch-specific updates, this patch reflects
a re-run of util/update_headers.sh to pull in upstream updates from
linux v5.2-rc1.

Signed-off-by: Sudeep Holla 
---
 arm/aarch64/include/asm/kvm.h | 43 +++
 include/linux/kvm.h   | 15 +--
 powerpc/include/asm/kvm.h | 48 +++
 x86/include/asm/kvm.h |  1 +
 4 files changed, 105 insertions(+), 2 deletions(-)

diff --git a/arm/aarch64/include/asm/kvm.h b/arm/aarch64/include/asm/kvm.h
index 97c3478ee6e7..7b7ac0f6cec9 100644
--- a/arm/aarch64/include/asm/kvm.h
+++ b/arm/aarch64/include/asm/kvm.h
@@ -35,6 +35,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #define __KVM_HAVE_GUEST_DEBUG
 #define __KVM_HAVE_IRQ_LINE
@@ -102,6 +103,9 @@ struct kvm_regs {
 #define KVM_ARM_VCPU_EL1_32BIT 1 /* CPU running a 32bit VM */
 #define KVM_ARM_VCPU_PSCI_0_2  2 /* CPU uses PSCI v0.2 */
 #define KVM_ARM_VCPU_PMU_V3	3 /* Support guest PMUv3 */
+#define KVM_ARM_VCPU_SVE   4 /* enable SVE for this CPU */
+#define KVM_ARM_VCPU_PTRAUTH_ADDRESS   5 /* VCPU uses address authentication */
+#define KVM_ARM_VCPU_PTRAUTH_GENERIC   6 /* VCPU uses generic authentication */
 
 struct kvm_vcpu_init {
__u32 target;
@@ -226,6 +230,45 @@ struct kvm_vcpu_events {
 KVM_REG_ARM_FW | ((r) & 0xffff))
 #define KVM_REG_ARM_PSCI_VERSION   KVM_REG_ARM_FW_REG(0)
 
+/* SVE registers */
+#define KVM_REG_ARM64_SVE  (0x15 << KVM_REG_ARM_COPROC_SHIFT)
+
+/* Z- and P-regs occupy blocks at the following offsets within this range: */
+#define KVM_REG_ARM64_SVE_ZREG_BASE0
+#define KVM_REG_ARM64_SVE_PREG_BASE0x400
+#define KVM_REG_ARM64_SVE_FFR_BASE 0x600
+
+#define KVM_ARM64_SVE_NUM_ZREGS	__SVE_NUM_ZREGS
+#define KVM_ARM64_SVE_NUM_PREGS	__SVE_NUM_PREGS
+
+#define KVM_ARM64_SVE_MAX_SLICES   32
+
+#define KVM_REG_ARM64_SVE_ZREG(n, i)   \
+   (KVM_REG_ARM64 | KVM_REG_ARM64_SVE | KVM_REG_ARM64_SVE_ZREG_BASE | \
+KVM_REG_SIZE_U2048 |   \
+(((n) & (KVM_ARM64_SVE_NUM_ZREGS - 1)) << 5) | \
+((i) & (KVM_ARM64_SVE_MAX_SLICES - 1)))
+
+#define KVM_REG_ARM64_SVE_PREG(n, i)   \
+   (KVM_REG_ARM64 | KVM_REG_ARM64_SVE | KVM_REG_ARM64_SVE_PREG_BASE | \
+KVM_REG_SIZE_U256 |\
+(((n) & (KVM_ARM64_SVE_NUM_PREGS - 1)) << 5) | \
+((i) & (KVM_ARM64_SVE_MAX_SLICES - 1)))
+
+#define KVM_REG_ARM64_SVE_FFR(i)   \
+   (KVM_REG_ARM64 | KVM_REG_ARM64_SVE | KVM_REG_ARM64_SVE_FFR_BASE | \
+KVM_REG_SIZE_U256 |\
+((i) & (KVM_ARM64_SVE_MAX_SLICES - 1)))
+
+#define KVM_ARM64_SVE_VQ_MIN __SVE_VQ_MIN
+#define KVM_ARM64_SVE_VQ_MAX __SVE_VQ_MAX
+
+/* Vector lengths pseudo-register: */
+#define KVM_REG_ARM64_SVE_VLS  (KVM_REG_ARM64 | KVM_REG_ARM64_SVE | \
+KVM_REG_SIZE_U512 | 0xffff)
+#define KVM_ARM64_SVE_VLS_WORDS\
+   ((KVM_ARM64_SVE_VQ_MAX - KVM_ARM64_SVE_VQ_MIN) / 64 + 1)
+
 /* Device Control API: ARM VGIC */
 #define KVM_DEV_ARM_VGIC_GRP_ADDR  0
 #define KVM_DEV_ARM_VGIC_GRP_DIST_REGS 1
diff --git a/include/linux/kvm.h b/include/linux/kvm.h
index 6d4ea4b6c922..2fe12b40d503 100644
--- a/include/linux/kvm.h
+++ b/include/linux/kvm.h
@@ -986,8 +986,13 @@ struct kvm_ppc_resize_hpt {
 #define KVM_CAP_HYPERV_ENLIGHTENED_VMCS 163
 #define KVM_CAP_EXCEPTION_PAYLOAD 164
 #define KVM_CAP_ARM_VM_IPA_SIZE 165
-#define KVM_CAP_MANUAL_DIRTY_LOG_PROTECT 166
+#define KVM_CAP_MANUAL_DIRTY_LOG_PROTECT 166 /* Obsolete */
 #define KVM_CAP_HYPERV_CPUID 167
+#define KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 168
+#define KVM_CAP_PPC_IRQ_XIVE 169
+#define KVM_CAP_ARM_SVE 170
+#define KVM_CAP_ARM_PTRAUTH_ADDRESS 171
+#define KVM_CAP_ARM_PTRAUTH_GENERIC 172
 
 #ifdef KVM_CAP_IRQ_ROUTING
 
@@ -1145,6 +1150,7 @@ struct kvm_dirty_tlb {
 #define KVM_REG_SIZE_U256  0x0050ULL
 #define KVM_REG_SIZE_U512  0x0060ULL
 #define KVM_REG_SIZE_U1024 0x0070ULL
+#define KVM_REG_SIZE_U2048 0x0080ULL
 
 struct kvm_reg_list {
__u64 n; /* number of regs */
@@ -1211,6 +1217,8 @@ enum kvm_device_type {
 #define KVM_DEV_TYPE_ARM_VGIC_V3   KVM_DEV_TYPE_ARM_VGIC_V3
KVM_DEV_TYPE_ARM_VGIC_ITS,
 #define KVM_DEV_TYPE_ARM_VGIC_ITS  KVM_DEV_TYPE_ARM_VGIC_ITS
+   KVM_DEV_TYPE_XIVE,
+#define KVM_DEV_TYPE_XIVE  KVM_DEV_TYPE_XIVE
KVM_DEV_TYPE_MAX,
 };
 
@@ -1434,12 +1442,15 @@ struct kvm_enc_region {
 #define KVM_GET_NESTED_STATE _IOWR(KVMIO

[PATCH v2 08/15] arm64: KVM/debug: drop pmscr_el1 and use sys_regs[PMSCR_EL1] in kvm_cpu_context

2019-05-23 Thread Sudeep Holla
kvm_cpu_context now has support to stash the complete SPE buffer control
context. We no longer need the pmscr_el1 kvm_vcpu_arch and it can be
dropped.

Signed-off-by: Sudeep Holla 
---
 arch/arm64/include/asm/kvm_host.h |  2 --
 arch/arm64/kvm/hyp/debug-sr.c | 26 +++---
 2 files changed, 15 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h 
b/arch/arm64/include/asm/kvm_host.h
index 559aa6931291..6921fdfd477b 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -310,8 +310,6 @@ struct kvm_vcpu_arch {
struct {
/* {Break,watch}point registers */
struct kvm_guest_debug_arch regs;
-   /* Statistical profiling extension */
-   u64 pmscr_el1;
} host_debug_state;
 
/* VGIC state */
diff --git a/arch/arm64/kvm/hyp/debug-sr.c b/arch/arm64/kvm/hyp/debug-sr.c
index 618884df1dc4..a2714a5eb3e9 100644
--- a/arch/arm64/kvm/hyp/debug-sr.c
+++ b/arch/arm64/kvm/hyp/debug-sr.c
@@ -66,19 +66,19 @@
default:write_debug(ptr[0], reg, 0);\
}
 
-static void __hyp_text __debug_save_spe_nvhe(u64 *pmscr_el1)
+static void __hyp_text __debug_save_spe_nvhe(struct kvm_cpu_context *ctxt)
 {
u64 reg;
 
/* Clear pmscr in case of early return */
-   *pmscr_el1 = 0;
+   ctxt->sys_regs[PMSCR_EL1] = 0;
 
/* SPE present on this CPU? */
if (!cpuid_feature_extract_unsigned_field(read_sysreg(id_aa64dfr0_el1),
  ID_AA64DFR0_PMSVER_SHIFT))
return;
 
-   /* Yes; is it owned by EL3? */
+   /* Yes; is it owned by higher EL? */
reg = read_sysreg_s(SYS_PMBIDR_EL1);
if (reg & BIT(SYS_PMBIDR_EL1_P_SHIFT))
return;
@@ -89,7 +89,7 @@ static void __hyp_text __debug_save_spe_nvhe(u64 *pmscr_el1)
return;
 
/* Yes; save the control register and disable data generation */
-   *pmscr_el1 = read_sysreg_el1_s(SYS_PMSCR);
+   ctxt->sys_regs[PMSCR_EL1] = read_sysreg_el1_s(SYS_PMSCR);
write_sysreg_el1_s(0, SYS_PMSCR);
isb();
 
@@ -98,16 +98,16 @@ static void __hyp_text __debug_save_spe_nvhe(u64 *pmscr_el1)
dsb(nsh);
 }
 
-static void __hyp_text __debug_restore_spe_nvhe(u64 pmscr_el1)
+static void __hyp_text __debug_restore_spe_nvhe(struct kvm_cpu_context *ctxt)
 {
-   if (!pmscr_el1)
+   if (!ctxt->sys_regs[PMSCR_EL1])
return;
 
/* The host page table is installed, but not yet synchronised */
isb();
 
/* Re-enable data generation */
-   write_sysreg_el1_s(pmscr_el1, SYS_PMSCR);
+   write_sysreg_el1_s(ctxt->sys_regs[PMSCR_EL1], SYS_PMSCR);
 }
 
 static void __hyp_text __debug_save_state(struct kvm_vcpu *vcpu,
@@ -175,14 +175,15 @@ void __hyp_text __debug_restore_host_context(struct 
kvm_vcpu *vcpu)
struct kvm_guest_debug_arch *host_dbg;
struct kvm_guest_debug_arch *guest_dbg;
 
+   host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
+   guest_ctxt = &vcpu->arch.ctxt;
+
if (!has_vhe())
-   __debug_restore_spe_nvhe(vcpu->arch.host_debug_state.pmscr_el1);
+   __debug_restore_spe_nvhe(host_ctxt);
 
if (!(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY))
return;
 
-   host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
-   guest_ctxt = &vcpu->arch.ctxt;
host_dbg = &vcpu->arch.host_debug_state.regs;
guest_dbg = kern_hyp_va(vcpu->arch.debug_ptr);
 
@@ -198,8 +199,11 @@ void __hyp_text __debug_save_host_context(struct kvm_vcpu 
*vcpu)
 * Non-VHE: Disable and flush SPE data generation
 * VHE: The vcpu can run, but it can't hide.
 */
+   struct kvm_cpu_context *host_ctxt;
+
+   host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
if (!has_vhe())
-   __debug_save_spe_nvhe(&vcpu->arch.host_debug_state.pmscr_el1);
+   __debug_save_spe_nvhe(host_ctxt);
 }
 
 void __hyp_text __debug_save_guest_context(struct kvm_vcpu *vcpu)
-- 
2.17.1



[PATCH v2 15/15][KVMTOOL] kvm: add a vcpu feature for SPEv1 support

2019-05-23 Thread Sudeep Holla
This adds a runtime-configurable option to the KVM tool to enable
Statistical Profiling Extension version 1 support in the guest kernel.
A command-line option, --spe, is required to use it.

Signed-off-by: Sudeep Holla 
---
 Makefile  |  2 +-
 arm/aarch64/arm-cpu.c |  2 +
 arm/aarch64/include/asm/kvm.h |  4 ++
 arm/aarch64/include/kvm/kvm-config-arch.h |  2 +
 arm/aarch64/include/kvm/kvm-cpu-arch.h|  3 +-
 arm/include/arm-common/kvm-config-arch.h  |  1 +
 arm/include/arm-common/spe.h  |  4 ++
 arm/spe.c | 81 +++
 include/linux/kvm.h   |  1 +
 9 files changed, 98 insertions(+), 2 deletions(-)
 create mode 100644 arm/include/arm-common/spe.h
 create mode 100644 arm/spe.c

diff --git a/Makefile b/Makefile
index 9e21a4e2b419..b7c7ad8caf20 100644
--- a/Makefile
+++ b/Makefile
@@ -158,7 +158,7 @@ endif
 # ARM
 OBJS_ARM_COMMON:= arm/fdt.o arm/gic.o arm/gicv2m.o 
arm/ioport.o \
   arm/kvm.o arm/kvm-cpu.o arm/pci.o arm/timer.o \
-  arm/pmu.o
+  arm/pmu.o arm/spe.o
 HDRS_ARM_COMMON:= arm/include
 ifeq ($(ARCH), arm)
DEFINES += -DCONFIG_ARM
diff --git a/arm/aarch64/arm-cpu.c b/arm/aarch64/arm-cpu.c
index d7572b7790b1..6ccea033f361 100644
--- a/arm/aarch64/arm-cpu.c
+++ b/arm/aarch64/arm-cpu.c
@@ -6,6 +6,7 @@
 #include "arm-common/gic.h"
 #include "arm-common/timer.h"
 #include "arm-common/pmu.h"
+#include "arm-common/spe.h"
 
 #include 
 #include 
@@ -17,6 +18,7 @@ static void generate_fdt_nodes(void *fdt, struct kvm *kvm)
gic__generate_fdt_nodes(fdt, kvm->cfg.arch.irqchip);
timer__generate_fdt_nodes(fdt, kvm, timer_interrupts);
pmu__generate_fdt_nodes(fdt, kvm);
+   spe__generate_fdt_nodes(fdt, kvm);
 }
 
 static int arm_cpu__vcpu_init(struct kvm_cpu *vcpu)
diff --git a/arm/aarch64/include/asm/kvm.h b/arm/aarch64/include/asm/kvm.h
index 7b7ac0f6cec9..4c9e168de896 100644
--- a/arm/aarch64/include/asm/kvm.h
+++ b/arm/aarch64/include/asm/kvm.h
@@ -106,6 +106,7 @@ struct kvm_regs {
 #define KVM_ARM_VCPU_SVE   4 /* enable SVE for this CPU */
 #define KVM_ARM_VCPU_PTRAUTH_ADDRESS   5 /* VCPU uses address authentication */
 #define KVM_ARM_VCPU_PTRAUTH_GENERIC   6 /* VCPU uses generic authentication */
+#define KVM_ARM_VCPU_SPE_V1	7 /* Support guest SPEv1 */
 
 struct kvm_vcpu_init {
__u32 target;
@@ -306,6 +307,9 @@ struct kvm_vcpu_events {
 #define KVM_ARM_VCPU_TIMER_CTRL	1
 #define   KVM_ARM_VCPU_TIMER_IRQ_VTIMER	0
 #define   KVM_ARM_VCPU_TIMER_IRQ_PTIMER	1
+#define KVM_ARM_VCPU_SPE_V1_CTRL   2
+#define   KVM_ARM_VCPU_SPE_V1_IRQ  0
+#define   KVM_ARM_VCPU_SPE_V1_INIT 1
 
 /* KVM_IRQ_LINE irq field index values */
 #define KVM_ARM_IRQ_TYPE_SHIFT 24
diff --git a/arm/aarch64/include/kvm/kvm-config-arch.h 
b/arm/aarch64/include/kvm/kvm-config-arch.h
index 04be43dfa9b2..9968e1666de5 100644
--- a/arm/aarch64/include/kvm/kvm-config-arch.h
+++ b/arm/aarch64/include/kvm/kvm-config-arch.h
@@ -6,6 +6,8 @@
"Run AArch32 guest"),   \
OPT_BOOLEAN('\0', "pmu", &(cfg)->has_pmuv3, \
"Create PMUv3 device"), \
+   OPT_BOOLEAN('\0', "spe", &(cfg)->has_spev1, \
+   "Create SPEv1 device"), \
OPT_U64('\0', "kaslr-seed", &(cfg)->kaslr_seed, \
"Specify random seed for Kernel Address Space " \
"Layout Randomization (KASLR)"),
diff --git a/arm/aarch64/include/kvm/kvm-cpu-arch.h 
b/arm/aarch64/include/kvm/kvm-cpu-arch.h
index a9d8563382c6..5abaf9505274 100644
--- a/arm/aarch64/include/kvm/kvm-cpu-arch.h
+++ b/arm/aarch64/include/kvm/kvm-cpu-arch.h
@@ -8,7 +8,8 @@
 #define ARM_VCPU_FEATURE_FLAGS(kvm, cpuid) {   
\
[0] = ((!!(cpuid) << KVM_ARM_VCPU_POWER_OFF) |  
\
   (!!(kvm)->cfg.arch.aarch32_guest << KVM_ARM_VCPU_EL1_32BIT) |
\
-  (!!(kvm)->cfg.arch.has_pmuv3 << KVM_ARM_VCPU_PMU_V3))
\
+  (!!(kvm)->cfg.arch.has_pmuv3 << KVM_ARM_VCPU_PMU_V3) |   
\
+  (!!(kvm)->cfg.arch.has_spev1 << KVM_ARM_VCPU_SPE_V1))
\
 }
 
 #define ARM_MPIDR_HWID_BITMASK 0xFF00FFUL
diff --git a/arm/include/arm-common/kvm-config-arch.h 
b/arm/include/arm-common/kvm-config-arch.h
index 5734c46ab9e6..742733e289af 100644
--- a/arm/include/arm-common/kvm-config-arch.h
+++ b/arm/include/arm-common/k
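
With this applied, enabling the feature should be just a matter of adding
the new flag to the usual lkvm invocation, e.g. `lkvm run --spe` alongside
the kernel image and disk arguments (the exact command line is
illustrative), with a guest kernel that has the SPE driver enabled.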

[PATCH 00/15] arm64: KVM: add SPE profiling support for guest

2019-05-23 Thread Sudeep Holla
Hi,

This series implements support for allowing KVM guests to use the Arm
Statistical Profiling Extension (SPE).

The patches are also available on a branch[1]. The last two extra
patches are for the kvmtool if someone wants to play with it.

Regards,
Sudeep

v1->v2:
- Rebased on v5.2-rc1
- Adjusted sysreg_elx_s macros with merged clang build support

[1] git://git.kernel.org/pub/scm/linux/kernel/git/sudeep.holla/linux.git kvm_spe

Sudeep Holla (15):
  KVM: arm64: add {read,write}_sysreg_elx_s versions for new registers
  dt-bindings: ARM SPE: highlight the need for PPI partitions on
heterogeneous systems
  arm64: KVM: reset E2PB correctly in MDCR_EL2 when exiting the
guest(VHE)
  arm64: KVM: define SPE data structure for each vcpu
  arm64: KVM: add access handler for SPE system registers
  arm64: KVM/VHE: enable the use PMSCR_EL12 on VHE systems
  arm64: KVM: split debug save restore across vm/traps activation
  arm64: KVM/debug: drop pmscr_el1 and use sys_regs[PMSCR_EL1] in
kvm_cpu_context
  arm64: KVM: add support to save/restore SPE profiling buffer controls
  arm64: KVM: enable conditional save/restore full SPE profiling buffer
controls
  arm64: KVM/debug: trap all accesses to SPE controls at EL1
  KVM: arm64: add a new vcpu device control group for SPEv1
  KVM: arm64: enable SPE support
  KVMTOOL: update_headers: Sync kvm UAPI headers with linux v5.2-rc1
  KVMTOOL: kvm: add a vcpu feature for SPEv1 support

 .../devicetree/bindings/arm/spe-pmu.txt   |   5 +-
 Documentation/virtual/kvm/devices/vcpu.txt|  28 +++
 arch/arm64/boot/dts/arm/rtsm_ve-aemv8a.dts| 185 +++---
 arch/arm64/configs/defconfig  |   6 +
 arch/arm64/include/asm/kvm_host.h |  19 +-
 arch/arm64/include/asm/kvm_hyp.h  |  26 ++-
 arch/arm64/include/uapi/asm/kvm.h |   4 +
 arch/arm64/kvm/Kconfig|   7 +
 arch/arm64/kvm/Makefile   |   1 +
 arch/arm64/kvm/guest.c|   9 +
 arch/arm64/kvm/hyp/debug-sr.c |  98 +++---
 arch/arm64/kvm/hyp/switch.c   |  18 +-
 arch/arm64/kvm/reset.c|   3 +
 arch/arm64/kvm/sys_regs.c |  35 
 include/kvm/arm_spe.h |  71 +++
 include/uapi/linux/kvm.h  |   1 +
 virt/kvm/arm/arm.c|   5 +
 virt/kvm/arm/spe.c| 163 +++
 18 files changed, 570 insertions(+), 114 deletions(-)
 create mode 100644 include/kvm/arm_spe.h
 create mode 100644 virt/kvm/arm/spe.c

--
2.17.1

___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


[PATCH v2 12/15] KVM: arm64: add a new vcpu device control group for SPEv1

2019-05-23 Thread Sudeep Holla
To configure the virtual SPEv1 overflow interrupt number, we use the
vcpu kvm_device ioctl, encapsulating the KVM_ARM_VCPU_SPE_V1_IRQ
attribute within the KVM_ARM_VCPU_SPE_V1_CTRL group.

After configuring the SPEv1, call the vcpu ioctl with attribute
KVM_ARM_VCPU_SPE_V1_INIT to initialize the SPEv1.
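
For illustration, a minimal userspace sketch of the resulting sequence
(untested; assuming the usual vcpu fd plumbing and an in-kernel irqchip,
with the PPI number 21 picked arbitrarily):

	#include <stdint.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	static int vcpu_spe_attr(int vcpu_fd, uint64_t id, void *addr)
	{
		struct kvm_device_attr attr = {
			.group	= KVM_ARM_VCPU_SPE_V1_CTRL,
			.attr	= id,
			.addr	= (uint64_t)(unsigned long)addr,
		};

		return ioctl(vcpu_fd, KVM_SET_DEVICE_ATTR, &attr);
	}

	int irq = 21;	/* must be a PPI, identical on every vcpu */

	vcpu_spe_attr(vcpu_fd, KVM_ARM_VCPU_SPE_V1_IRQ, &irq);
	/* ... create and initialize the in-kernel GIC ... */
	vcpu_spe_attr(vcpu_fd, KVM_ARM_VCPU_SPE_V1_INIT, NULL);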

Signed-off-by: Sudeep Holla 
---
 Documentation/virtual/kvm/devices/vcpu.txt |  28 
 arch/arm64/include/asm/kvm_host.h  |   2 +-
 arch/arm64/include/uapi/asm/kvm.h  |   4 +
 arch/arm64/kvm/Makefile|   1 +
 arch/arm64/kvm/guest.c |   9 ++
 arch/arm64/kvm/reset.c |   3 +
 include/kvm/arm_spe.h  |  35 +
 include/uapi/linux/kvm.h   |   1 +
 virt/kvm/arm/arm.c |   1 +
 virt/kvm/arm/spe.c | 163 +
 10 files changed, 246 insertions(+), 1 deletion(-)
 create mode 100644 virt/kvm/arm/spe.c

diff --git a/Documentation/virtual/kvm/devices/vcpu.txt 
b/Documentation/virtual/kvm/devices/vcpu.txt
index 2b5dab16c4f2..d1ece488aeee 100644
--- a/Documentation/virtual/kvm/devices/vcpu.txt
+++ b/Documentation/virtual/kvm/devices/vcpu.txt
@@ -60,3 +60,31 @@ time to use the number provided for a given timer, 
overwriting any previously
 configured values on other VCPUs.  Userspace should configure the interrupt
 numbers on at least one VCPU after creating all VCPUs and before running any
 VCPUs.
+
+3. GROUP: KVM_ARM_VCPU_SPE_V1_CTRL
+Architectures: ARM64
+
+3.1. ATTRIBUTE: KVM_ARM_VCPU_SPE_V1_IRQ
+Parameters: in kvm_device_attr.addr the address for SPE buffer overflow interrupt
+   is a pointer to an int
+Returns: -EBUSY: The SPE overflow interrupt is already set
+ -ENXIO: The overflow interrupt is not set when attempting to get it
+ -ENODEV: SPEv1 not supported
+ -EINVAL: Invalid SPE overflow interrupt number supplied or
+  trying to set the IRQ number without using an in-kernel
+  irqchip.
+
+A value describing the SPEv1 (Statistical Profiling Extension v1) overflow
+interrupt number for this vcpu. This interrupt should be a PPI and the
+interrupt type and number must be the same for each vcpu.
+
+3.2. ATTRIBUTE: KVM_ARM_VCPU_SPE_V1_INIT
+Parameters: no additional parameter in kvm_device_attr.addr
+Returns: -ENODEV: SPEv1 not supported or GIC not initialized
+ -ENXIO: SPEv1 not properly configured or in-kernel irqchip not
+ configured as required prior to calling this attribute
+ -EBUSY: SPEv1 already initialized
+
+Request the initialization of the SPEv1.  If using the SPEv1 with an in-kernel
+virtual GIC implementation, this must be done after initializing the in-kernel
+irqchip.
diff --git a/arch/arm64/include/asm/kvm_host.h 
b/arch/arm64/include/asm/kvm_host.h
index 6921fdfd477b..fc4ead0774b3 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -50,7 +50,7 @@
 
 #define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS
 
-#define KVM_VCPU_MAX_FEATURES 7
+#define KVM_VCPU_MAX_FEATURES 8
 
 #define KVM_REQ_SLEEP \
KVM_ARCH_REQ_FLAGS(0, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
diff --git a/arch/arm64/include/uapi/asm/kvm.h 
b/arch/arm64/include/uapi/asm/kvm.h
index 7b7ac0f6cec9..4c9e168de896 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -106,6 +106,7 @@ struct kvm_regs {
 #define KVM_ARM_VCPU_SVE   4 /* enable SVE for this CPU */
 #define KVM_ARM_VCPU_PTRAUTH_ADDRESS   5 /* VCPU uses address authentication */
 #define KVM_ARM_VCPU_PTRAUTH_GENERIC   6 /* VCPU uses generic authentication */
+#define KVM_ARM_VCPU_SPE_V1	7 /* Support guest SPEv1 */
 
 struct kvm_vcpu_init {
__u32 target;
@@ -306,6 +307,9 @@ struct kvm_vcpu_events {
 #define KVM_ARM_VCPU_TIMER_CTRL		1
 #define   KVM_ARM_VCPU_TIMER_IRQ_VTIMER	0
 #define   KVM_ARM_VCPU_TIMER_IRQ_PTIMER	1
+#define KVM_ARM_VCPU_SPE_V1_CTRL   2
+#define   KVM_ARM_VCPU_SPE_V1_IRQ  0
+#define   KVM_ARM_VCPU_SPE_V1_INIT 1
 
 /* KVM_IRQ_LINE irq field index values */
 #define KVM_ARM_IRQ_TYPE_SHIFT 24
diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
index 3ac1a64d2fb9..1ba6154dd8e1 100644
--- a/arch/arm64/kvm/Makefile
+++ b/arch/arm64/kvm/Makefile
@@ -35,3 +35,4 @@ kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic/vgic-debug.o
 kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/irqchip.o
 kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/arch_timer.o
 kvm-$(CONFIG_KVM_ARM_PMU) += $(KVM)/arm/pmu.o
+kvm-$(CONFIG_KVM_ARM_SPE) += $(KVM)/arm/spe.o
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 3ae2f82fca46..02c28a7eb332 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -848,6 +848,9 @@ int kvm_arm_vcpu_arch_set_attr(struct kvm_vcpu *vcpu,
case KVM_ARM_VCPU_TIMER_CTRL:
ret = kvm_arm_timer_set_attr(vcpu, attr

[PATCH v2 02/15] dt-bindings: ARM SPE: highlight the need for PPI partitions on heterogeneous systems

2019-05-23 Thread Sudeep Holla
It's not entirely clear from the binding document that the only way to
express ARM SPE affine to a subset of CPUs on a heterogeneous system is
through the use of PPI partitions available in the interrupt controller
bindings.

Let's make it clear.

Signed-off-by: Sudeep Holla 
---
 Documentation/devicetree/bindings/arm/spe-pmu.txt | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/Documentation/devicetree/bindings/arm/spe-pmu.txt 
b/Documentation/devicetree/bindings/arm/spe-pmu.txt
index 93372f2a7df9..4f4815800f6e 100644
--- a/Documentation/devicetree/bindings/arm/spe-pmu.txt
+++ b/Documentation/devicetree/bindings/arm/spe-pmu.txt
@@ -9,8 +9,9 @@ performance sample data using an in-memory trace buffer.
   "arm,statistical-profiling-extension-v1"
 
 - interrupts : Exactly 1 PPI must be listed. For heterogeneous systems where
-   SPE is only supported on a subset of the CPUs, please consult
-  the arm,gic-v3 binding for details on describing a PPI partition.
+   SPE is only supported on a subset of the CPUs, a PPI partition
+  described in the arm,gic-v3 binding must be used to describe
+  the set of CPUs this interrupt is affine to.
 
 ** Example:
 
-- 
2.17.1



[PATCH v2 10/15] arm64: KVM: enable conditional save/restore full SPE profiling buffer controls

2019-05-23 Thread Sudeep Holla
Now that we can save/restore the full SPE controls, we can enable them
if SPE is set up and ready to use in KVM. It's supported in KVM only if
all the CPUs in the system support SPE.

However, to support heterogeneous systems, we need to move the check for
host SPE support out of the save routine itself and do a partial
save/restore.
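
In other words, the intended gating is roughly the following (a sketch,
not the exact diff; host_cpu_has_spe() is a hypothetical shorthand for
the ID_AA64DFR0_EL1.PMSVer check used below):

	/* host side: only touch SPE state if this CPU implements SPE */
	if (host_cpu_has_spe())
		__debug_save_spe_context(host_ctxt, kvm_arm_spe_v1_ready(vcpu));

	/* guest side: only touch SPE state for vcpus with SPE configured */
	if (kvm_arm_spe_v1_ready(vcpu))
		__debug_save_spe_context(&vcpu->arch.ctxt, true);

With full_ctxt == false this keeps the old behaviour (save PMSCR_EL1 and
drain the buffer); with full_ctxt == true the complete set of buffer
controls is saved and restored.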

Signed-off-by: Sudeep Holla 
---
 arch/arm64/kvm/hyp/debug-sr.c | 33 -
 include/kvm/arm_spe.h |  3 +++
 2 files changed, 19 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/kvm/hyp/debug-sr.c b/arch/arm64/kvm/hyp/debug-sr.c
index a4e6eaf5934f..cd0a7571abc1 100644
--- a/arch/arm64/kvm/hyp/debug-sr.c
+++ b/arch/arm64/kvm/hyp/debug-sr.c
@@ -67,18 +67,13 @@
}
 
 static void __hyp_text
-__debug_save_spe_nvhe(struct kvm_cpu_context *ctxt, bool full_ctxt)
+__debug_save_spe_context(struct kvm_cpu_context *ctxt, bool full_ctxt)
 {
u64 reg;
 
/* Clear pmscr in case of early return */
ctxt->sys_regs[PMSCR_EL1] = 0;
 
-   /* SPE present on this CPU? */
-   if (!cpuid_feature_extract_unsigned_field(read_sysreg(id_aa64dfr0_el1),
- ID_AA64DFR0_PMSVER_SHIFT))
-   return;
-
/* Yes; is it owned by higher EL? */
reg = read_sysreg_s(SYS_PMBIDR_EL1);
if (reg & BIT(SYS_PMBIDR_EL1_P_SHIFT))
@@ -114,7 +109,7 @@ __debug_save_spe_nvhe(struct kvm_cpu_context *ctxt, bool 
full_ctxt)
 }
 
 static void __hyp_text
-__debug_restore_spe_nvhe(struct kvm_cpu_context *ctxt, bool full_ctxt)
+__debug_restore_spe_context(struct kvm_cpu_context *ctxt, bool full_ctxt)
 {
if (!ctxt->sys_regs[PMSCR_EL1])
return;
@@ -182,11 +177,14 @@ void __hyp_text __debug_restore_guest_context(struct 
kvm_vcpu *vcpu)
struct kvm_guest_debug_arch *host_dbg;
struct kvm_guest_debug_arch *guest_dbg;
 
+   host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
+   guest_ctxt = &vcpu->arch.ctxt;
+
+   __debug_restore_spe_context(guest_ctxt, kvm_arm_spe_v1_ready(vcpu));
+
if (!(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY))
return;
 
-   host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
-   guest_ctxt = &vcpu->arch.ctxt;
host_dbg = >arch.host_debug_state.regs;
guest_dbg = kern_hyp_va(vcpu->arch.debug_ptr);
 
@@ -204,8 +202,7 @@ void __hyp_text __debug_restore_host_context(struct 
kvm_vcpu *vcpu)
host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
 guest_ctxt = &vcpu->arch.ctxt;
 
-   if (!has_vhe())
-   __debug_restore_spe_nvhe(host_ctxt, false);
+   __debug_restore_spe_context(host_ctxt, kvm_arm_spe_v1_ready(vcpu));
 
if (!(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY))
return;
@@ -221,19 +218,21 @@ void __hyp_text __debug_restore_host_context(struct 
kvm_vcpu *vcpu)
 
 void __hyp_text __debug_save_host_context(struct kvm_vcpu *vcpu)
 {
-   /*
-* Non-VHE: Disable and flush SPE data generation
-* VHE: The vcpu can run, but it can't hide.
-*/
struct kvm_cpu_context *host_ctxt;
 
host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
-   if (!has_vhe())
-   __debug_save_spe_nvhe(host_ctxt, false);
+   if (cpuid_feature_extract_unsigned_field(read_sysreg(id_aa64dfr0_el1),
+ID_AA64DFR0_PMSVER_SHIFT))
+   __debug_save_spe_context(host_ctxt, kvm_arm_spe_v1_ready(vcpu));
 }
 
 void __hyp_text __debug_save_guest_context(struct kvm_vcpu *vcpu)
 {
+   bool kvm_spe_ready = kvm_arm_spe_v1_ready(vcpu);
+
+   /* SPE present on this vCPU? */
+   if (kvm_spe_ready)
+   __debug_save_spe_context(&vcpu->arch.ctxt, kvm_spe_ready);
 }
 
 u32 __hyp_text __kvm_get_mdcr_el2(void)
diff --git a/include/kvm/arm_spe.h b/include/kvm/arm_spe.h
index 2440ff02f747..fdcb0df1e0fd 100644
--- a/include/kvm/arm_spe.h
+++ b/include/kvm/arm_spe.h
@@ -18,6 +18,8 @@ struct kvm_spe {
 
 #ifdef CONFIG_KVM_ARM_SPE
 
+#define kvm_arm_spe_v1_ready(v)	((v)->arch.spe.ready)
+
 static inline bool kvm_arm_support_spe_v1(void)
 {
u64 dfr0 = read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1);
@@ -27,6 +29,7 @@ static inline bool kvm_arm_support_spe_v1(void)
 }
 #else
 
+#define kvm_arm_spe_v1_ready(v)	(false)
 #define kvm_arm_support_spe_v1()   (false)
 #endif /* CONFIG_KVM_ARM_SPE */
 
-- 
2.17.1



[PATCH v2 05/15] arm64: KVM: add access handler for SPE system registers

2019-05-23 Thread Sudeep Holla
The EL that owns the SPE Profiling Buffer is configurable: when
MDCR_EL2.E2PB is configured to give buffer ownership to EL1, accesses to
the control registers from EL1 are trapped to EL2.

Add access handlers for the Statistical Profiling Extension (SPE)
Profiling Buffer control registers. This is needed to support profiling
using SPE in the guests.
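
The round trip for a trapped access then reduces to the following
(sketched; the world-switch code is responsible for syncing the shadow
copy to hardware):

	/* guest executes: MSR PMSIRR_EL1, x0  ->  trap to EL2 */
	/* KVM matches the encoding in sys_reg_descs[] and calls the handler */
	if (p->is_write)
		vcpu_write_sys_reg(vcpu, p->regval, PMSIRR_EL1);	/* stash */
	else
		p->regval = vcpu_read_sys_reg(vcpu, PMSIRR_EL1);	/* replay */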

Signed-off-by: Sudeep Holla 
---
 arch/arm64/include/asm/kvm_host.h | 13 
 arch/arm64/kvm/sys_regs.c | 35 +++
 include/kvm/arm_spe.h | 15 +
 3 files changed, 63 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h 
b/arch/arm64/include/asm/kvm_host.h
index 611a4884fb6c..559aa6931291 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -147,6 +147,19 @@ enum vcpu_sysreg {
MDCCINT_EL1,/* Monitor Debug Comms Channel Interrupt Enable Reg */
DISR_EL1,   /* Deferred Interrupt Status Register */
 
+   /* Statistical Profiling Extension Registers */
+
+   PMSCR_EL1,
+   PMSICR_EL1,
+   PMSIRR_EL1,
+   PMSFCR_EL1,
+   PMSEVFR_EL1,
+   PMSLATFR_EL1,
+   PMSIDR_EL1,
+   PMBLIMITR_EL1,
+   PMBPTR_EL1,
+   PMBSR_EL1,
+
/* Performance Monitors Registers */
PMCR_EL0,   /* Control Register */
PMSELR_EL0, /* Event Counter Selection Register */
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 857b226bcdde..dbf5056828d3 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -646,6 +646,30 @@ static void reset_pmcr(struct kvm_vcpu *vcpu, const struct 
sys_reg_desc *r)
__vcpu_sys_reg(vcpu, PMCR_EL0) = val;
 }
 
+static bool access_pmsb_val(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+   const struct sys_reg_desc *r)
+{
+   if (p->is_write)
+   vcpu_write_sys_reg(vcpu, p->regval, r->reg);
+   else
+   p->regval = vcpu_read_sys_reg(vcpu, r->reg);
+
+   return true;
+}
+
+static void reset_pmsb_val(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+{
+   if (!kvm_arm_support_spe_v1()) {
+   __vcpu_sys_reg(vcpu, r->reg) = 0;
+   return;
+   }
+
+   if (r->reg == PMSIDR_EL1)
+   __vcpu_sys_reg(vcpu, r->reg) = read_sysreg_s(SYS_PMSIDR_EL1);
+   else
+   __vcpu_sys_reg(vcpu, r->reg) = 0;
+}
+
 static bool check_pmu_access_disabled(struct kvm_vcpu *vcpu, u64 flags)
 {
u64 reg = __vcpu_sys_reg(vcpu, PMUSERENR_EL0);
@@ -1513,6 +1537,17 @@ static const struct sys_reg_desc sys_reg_descs[] = {
{ SYS_DESC(SYS_FAR_EL1), access_vm_reg, reset_unknown, FAR_EL1 },
{ SYS_DESC(SYS_PAR_EL1), NULL, reset_unknown, PAR_EL1 },
 
+   { SYS_DESC(SYS_PMSCR_EL1), access_pmsb_val, reset_pmsb_val, PMSCR_EL1 },
+   { SYS_DESC(SYS_PMSICR_EL1), access_pmsb_val, reset_pmsb_val, PMSICR_EL1 },
+   { SYS_DESC(SYS_PMSIRR_EL1), access_pmsb_val, reset_pmsb_val, PMSIRR_EL1 },
+   { SYS_DESC(SYS_PMSFCR_EL1), access_pmsb_val, reset_pmsb_val, PMSFCR_EL1 },
+   { SYS_DESC(SYS_PMSEVFR_EL1), access_pmsb_val, reset_pmsb_val, PMSEVFR_EL1 },
+   { SYS_DESC(SYS_PMSLATFR_EL1), access_pmsb_val, reset_pmsb_val, PMSLATFR_EL1 },
+   { SYS_DESC(SYS_PMSIDR_EL1), access_pmsb_val, reset_pmsb_val, PMSIDR_EL1 },
+   { SYS_DESC(SYS_PMBLIMITR_EL1), access_pmsb_val, reset_pmsb_val, PMBLIMITR_EL1 },
+   { SYS_DESC(SYS_PMBPTR_EL1), access_pmsb_val, reset_pmsb_val, PMBPTR_EL1 },
+   { SYS_DESC(SYS_PMBSR_EL1), access_pmsb_val, reset_pmsb_val, PMBSR_EL1 },
+
	{ SYS_DESC(SYS_PMINTENSET_EL1), access_pminten, reset_unknown, PMINTENSET_EL1 },
{ SYS_DESC(SYS_PMINTENCLR_EL1), access_pminten, NULL, PMINTENSET_EL1 },
 
diff --git a/include/kvm/arm_spe.h b/include/kvm/arm_spe.h
index 8c96bdfad6ac..2440ff02f747 100644
--- a/include/kvm/arm_spe.h
+++ b/include/kvm/arm_spe.h
@@ -8,6 +8,7 @@
 
 #include 
 #include 
+#include 
 
 struct kvm_spe {
int irq;
@@ -15,4 +16,18 @@ struct kvm_spe {
bool created; /* SPE KVM instance is created, may not be ready yet */
 };
 
+#ifdef CONFIG_KVM_ARM_SPE
+
+static inline bool kvm_arm_support_spe_v1(void)
+{
+   u64 dfr0 = read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1);
+
+   return !!cpuid_feature_extract_unsigned_field(dfr0,
+ ID_AA64DFR0_PMSVER_SHIFT);
+}
+#else
+
+#define kvm_arm_support_spe_v1()   (false)
+#endif /* CONFIG_KVM_ARM_SPE */
+
 #endif /* __ASM_ARM_KVM_SPE_H */
-- 
2.17.1

___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


[PATCH v2 04/15] arm64: KVM: define SPE data structure for each vcpu

2019-05-23 Thread Sudeep Holla
In order to support virtual SPE for guests, define some basic structs.
This feature depends on the host hardware having SPE support.

Since we can support this only on ARM64, add a separate config symbol
for it.

Signed-off-by: Sudeep Holla 
---
 arch/arm64/include/asm/kvm_host.h |  2 ++
 arch/arm64/kvm/Kconfig|  7 +++
 include/kvm/arm_spe.h | 18 ++
 3 files changed, 27 insertions(+)
 create mode 100644 include/kvm/arm_spe.h

diff --git a/arch/arm64/include/asm/kvm_host.h 
b/arch/arm64/include/asm/kvm_host.h
index 2a8d3f8ca22c..611a4884fb6c 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -46,6 +46,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS
 
@@ -304,6 +305,7 @@ struct kvm_vcpu_arch {
struct vgic_cpu vgic_cpu;
struct arch_timer_cpu timer_cpu;
struct kvm_pmu pmu;
+   struct kvm_spe spe;
 
/*
 * Anything that is not used directly from assembly code goes
diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index a67121d419a2..3e178894ddd8 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -33,6 +33,7 @@ config KVM
select HAVE_KVM_EVENTFD
select HAVE_KVM_IRQFD
select KVM_ARM_PMU if HW_PERF_EVENTS
+   select KVM_ARM_SPE if (HW_PERF_EVENTS && ARM_SPE_PMU)
select HAVE_KVM_MSI
select HAVE_KVM_IRQCHIP
select HAVE_KVM_IRQ_ROUTING
@@ -57,6 +58,12 @@ config KVM_ARM_PMU
  Adds support for a virtual Performance Monitoring Unit (PMU) in
  virtual machines.
 
+config KVM_ARM_SPE
+   bool
+   ---help---
+ Adds support for a virtual Statistical Profiling Extension(SPE) in
+ virtual machines.
+
 config KVM_INDIRECT_VECTORS
def_bool KVM && (HARDEN_BRANCH_PREDICTOR || HARDEN_EL2_VECTORS)
 
diff --git a/include/kvm/arm_spe.h b/include/kvm/arm_spe.h
new file mode 100644
index ..8c96bdfad6ac
--- /dev/null
+++ b/include/kvm/arm_spe.h
@@ -0,0 +1,18 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2018 ARM Ltd.
+ */
+
+#ifndef __ASM_ARM_KVM_SPE_H
+#define __ASM_ARM_KVM_SPE_H
+
+#include 
+#include 
+
+struct kvm_spe {
+   int irq;
+   bool ready; /* indicates that SPE KVM instance is ready for use */
+   bool created; /* SPE KVM instance is created, may not be ready yet */
+};
+
+#endif /* __ASM_ARM_KVM_SPE_H */
-- 
2.17.1



[PATCH v2 03/15] arm64: KVM: reset E2PB correctly in MDCR_EL2 when exiting the guest(VHE)

2019-05-23 Thread Sudeep Holla
On VHE systems, the reset value is MDCR_EL2.E2PB=b00, which means the
profiling buffer uses the EL2 stage 1 translations. However, if the
guest is allowed to use the profiling buffer and to change the E2PB
setting, we need to ensure we restore MDCR_EL2.E2PB=b00 on exit.
Currently we just do a bitwise '&' with MDCR_EL2_E2PB_MASK, which
retains whatever value is there.

So fix it by clearing all the bits in E2PB.
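
A worked example, assuming the guest ran with E2PB == b10 (E2PB is the
2-bit field MDCR_EL2[13:12]): the old expression

	mdcr_el2 &= MDCR_EL2_HPMN_MASK |
		    MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT |
		    MDCR_EL2_TPMS;

keeps bits [13:12] set to b10 on return to the host, while the fixed
expression

	mdcr_el2 &= MDCR_EL2_HPMN_MASK | MDCR_EL2_TPMS;

clears them, forcing E2PB back to its b00 reset value.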

Signed-off-by: Sudeep Holla 
---
 arch/arm64/kvm/hyp/switch.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index 22b4c335e0b2..9b2461138ddc 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -202,9 +202,7 @@ void deactivate_traps_vhe_put(void)
 {
u64 mdcr_el2 = read_sysreg(mdcr_el2);
 
-   mdcr_el2 &= MDCR_EL2_HPMN_MASK |
-   MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT |
-   MDCR_EL2_TPMS;
+   mdcr_el2 &= MDCR_EL2_HPMN_MASK | MDCR_EL2_TPMS;
 
write_sysreg(mdcr_el2, mdcr_el2);
 
-- 
2.17.1

___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


[PATCH v2 11/15] arm64: KVM/debug: trap all accesses to SPE controls at EL1

2019-05-23 Thread Sudeep Holla
Now that we have all the save/restore mechanisms in place, let's trap
accesses to the SPE profiling buffer controls at EL1 to EL2. This also
changes the translation regime used by the buffer from EL2 stage 1 to
EL1 stage 1 on VHE systems.
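
For reference, the hunks below set E2PB (MDCR_EL2[13:12]) to b10:

	/*
	 * E2PB == b10: the profiling buffer uses the EL1&0 stage 1
	 * translation regime and EL1 accesses to the buffer control
	 * registers trap to EL2 (b00 would keep the EL2 regime, b11
	 * would disable the traps entirely).
	 */
	write_sysreg(vcpu->arch.mdcr_el2 | 2 << MDCR_EL2_E2PB_SHIFT, mdcr_el2);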

Signed-off-by: Sudeep Holla 
---
 arch/arm64/kvm/hyp/switch.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index 844f0dd7a7f0..881901825a85 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -110,6 +110,7 @@ static void activate_traps_vhe(struct kvm_vcpu *vcpu)
 
write_sysreg(val, cpacr_el1);
 
+   write_sysreg(vcpu->arch.mdcr_el2 | 2 << MDCR_EL2_E2PB_SHIFT, mdcr_el2);
write_sysreg(kvm_get_hyp_vector(), vbar_el1);
 }
 NOKPROBE_SYMBOL(activate_traps_vhe);
@@ -127,6 +128,7 @@ static void __hyp_text __activate_traps_nvhe(struct 
kvm_vcpu *vcpu)
__activate_traps_fpsimd32(vcpu);
}
 
+   write_sysreg(vcpu->arch.mdcr_el2 | 2 << MDCR_EL2_E2PB_SHIFT, mdcr_el2);
write_sysreg(val, cptr_el2);
 }
 
-- 
2.17.1

___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


[PATCH v2 13/15] KVM: arm64: enable SPE support

2019-05-23 Thread Sudeep Holla
We have all the bits and pieces needed to enable SPE for guests in
place, so let's enable it.

Signed-off-by: Sudeep Holla 
---
 virt/kvm/arm/arm.c | 4 
 1 file changed, 4 insertions(+)

diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index c5b711ef1cf8..935e2ed02b2e 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -577,6 +577,10 @@ static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
return ret;
 
ret = kvm_arm_pmu_v3_enable(vcpu);
+   if (ret)
+   return ret;
+
+   ret = kvm_arm_spe_v1_enable(vcpu);
 
return ret;
 }
-- 
2.17.1



[PATCH 10/13] arm64: KVM: enable conditional save/restore full SPE profiling buffer controls

2019-02-28 Thread Sudeep Holla
Now that we can save/restore the full SPE controls, we can enable them
if SPE is set up and ready to use in KVM. It's supported in KVM only if
all the CPUs in the system support SPE.

However, to support heterogeneous systems, we need to move the check for
host SPE support out of the save routine itself and do a partial
save/restore.

Signed-off-by: Sudeep Holla 
---
 arch/arm64/kvm/hyp/debug-sr.c | 33 -
 include/kvm/arm_spe.h |  3 +++
 2 files changed, 19 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/kvm/hyp/debug-sr.c b/arch/arm64/kvm/hyp/debug-sr.c
index a4e6eaf5934f..cd0a7571abc1 100644
--- a/arch/arm64/kvm/hyp/debug-sr.c
+++ b/arch/arm64/kvm/hyp/debug-sr.c
@@ -67,18 +67,13 @@
}
 
 static void __hyp_text
-__debug_save_spe_nvhe(struct kvm_cpu_context *ctxt, bool full_ctxt)
+__debug_save_spe_context(struct kvm_cpu_context *ctxt, bool full_ctxt)
 {
u64 reg;
 
/* Clear pmscr in case of early return */
ctxt->sys_regs[PMSCR_EL1] = 0;
 
-   /* SPE present on this CPU? */
-   if (!cpuid_feature_extract_unsigned_field(read_sysreg(id_aa64dfr0_el1),
- ID_AA64DFR0_PMSVER_SHIFT))
-   return;
-
/* Yes; is it owned by higher EL? */
reg = read_sysreg_s(SYS_PMBIDR_EL1);
if (reg & BIT(SYS_PMBIDR_EL1_P_SHIFT))
@@ -114,7 +109,7 @@ __debug_save_spe_nvhe(struct kvm_cpu_context *ctxt, bool 
full_ctxt)
 }
 
 static void __hyp_text
-__debug_restore_spe_nvhe(struct kvm_cpu_context *ctxt, bool full_ctxt)
+__debug_restore_spe_context(struct kvm_cpu_context *ctxt, bool full_ctxt)
 {
if (!ctxt->sys_regs[PMSCR_EL1])
return;
@@ -182,11 +177,14 @@ void __hyp_text __debug_restore_guest_context(struct 
kvm_vcpu *vcpu)
struct kvm_guest_debug_arch *host_dbg;
struct kvm_guest_debug_arch *guest_dbg;
 
+   host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
+   guest_ctxt = &vcpu->arch.ctxt;
+
+   __debug_restore_spe_context(guest_ctxt, kvm_arm_spe_v1_ready(vcpu));
+
if (!(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY))
return;
 
-   host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
-   guest_ctxt = &vcpu->arch.ctxt;
host_dbg = >arch.host_debug_state.regs;
guest_dbg = kern_hyp_va(vcpu->arch.debug_ptr);
 
@@ -204,8 +202,7 @@ void __hyp_text __debug_restore_host_context(struct 
kvm_vcpu *vcpu)
host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
 guest_ctxt = &vcpu->arch.ctxt;
 
-   if (!has_vhe())
-   __debug_restore_spe_nvhe(host_ctxt, false);
+   __debug_restore_spe_context(host_ctxt, kvm_arm_spe_v1_ready(vcpu));
 
if (!(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY))
return;
@@ -221,19 +218,21 @@ void __hyp_text __debug_restore_host_context(struct 
kvm_vcpu *vcpu)
 
 void __hyp_text __debug_save_host_context(struct kvm_vcpu *vcpu)
 {
-   /*
-* Non-VHE: Disable and flush SPE data generation
-* VHE: The vcpu can run, but it can't hide.
-*/
struct kvm_cpu_context *host_ctxt;
 
host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
-   if (!has_vhe())
-   __debug_save_spe_nvhe(host_ctxt, false);
+   if (cpuid_feature_extract_unsigned_field(read_sysreg(id_aa64dfr0_el1),
+ID_AA64DFR0_PMSVER_SHIFT))
+   __debug_save_spe_context(host_ctxt, kvm_arm_spe_v1_ready(vcpu));
 }
 
 void __hyp_text __debug_save_guest_context(struct kvm_vcpu *vcpu)
 {
+   bool kvm_spe_ready = kvm_arm_spe_v1_ready(vcpu);
+
+   /* SPE present on this vCPU? */
+   if (kvm_spe_ready)
+   __debug_save_spe_context(&vcpu->arch.ctxt, kvm_spe_ready);
 }
 
 u32 __hyp_text __kvm_get_mdcr_el2(void)
diff --git a/include/kvm/arm_spe.h b/include/kvm/arm_spe.h
index 5678f80e1528..132efd636722 100644
--- a/include/kvm/arm_spe.h
+++ b/include/kvm/arm_spe.h
@@ -18,6 +18,8 @@ struct kvm_spe {
 
 #ifdef CONFIG_KVM_ARM_SPE
 
+#define kvm_arm_spe_v1_ready(v)	((v)->arch.spe.ready)
+
 static inline bool kvm_arm_support_spe_v1(void)
 {
u64 dfr0 = read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1);
@@ -26,6 +28,7 @@ static inline bool kvm_arm_support_spe_v1(void)
 }
 #else
 
+#define kvm_arm_spe_v1_ready(v)	(false)
 #define kvm_arm_support_spe_v1()   (false)
 #endif /* CONFIG_KVM_ARM_SPE */
 
-- 
2.17.1

___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


[PATCH 02/13] dt-bindings: ARM SPE: highlight the need for PPI partitions on heterogeneous systems

2019-02-28 Thread Sudeep Holla
It's not entirely clear from the binding document that the only way to
express ARM SPE affine to a subset of CPUs on a heterogeneous system is
through the use of PPI partitions available in the interrupt controller
bindings.

Let's make it clear.

Signed-off-by: Sudeep Holla 
---
 Documentation/devicetree/bindings/arm/spe-pmu.txt | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/Documentation/devicetree/bindings/arm/spe-pmu.txt 
b/Documentation/devicetree/bindings/arm/spe-pmu.txt
index 93372f2a7df9..4f4815800f6e 100644
--- a/Documentation/devicetree/bindings/arm/spe-pmu.txt
+++ b/Documentation/devicetree/bindings/arm/spe-pmu.txt
@@ -9,8 +9,9 @@ performance sample data using an in-memory trace buffer.
   "arm,statistical-profiling-extension-v1"
 
 - interrupts : Exactly 1 PPI must be listed. For heterogeneous systems where
-   SPE is only supported on a subset of the CPUs, please consult
-  the arm,gic-v3 binding for details on describing a PPI partition.
+   SPE is only supported on a subset of the CPUs, a PPI partition
+  described in the arm,gic-v3 binding must be used to describe
+  the set of CPUs this interrupt is affine to.
 
 ** Example:
 
-- 
2.17.1

___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


[PATCH 04/13] arm64: KVM: define SPE data structure for each vcpu

2019-02-28 Thread Sudeep Holla
In order to support virtual SPE for guests, define some basic structs.
This feature depends on the host hardware having SPE support.

Since we can support this only on ARM64, add a separate config symbol
for it.

Signed-off-by: Sudeep Holla 
---
 arch/arm64/include/asm/kvm_host.h |  2 ++
 arch/arm64/kvm/Kconfig|  7 +++
 include/kvm/arm_spe.h | 18 ++
 3 files changed, 27 insertions(+)
 create mode 100644 include/kvm/arm_spe.h

diff --git a/arch/arm64/include/asm/kvm_host.h 
b/arch/arm64/include/asm/kvm_host.h
index da3fc7324d68..6714d6a0ef1e 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -40,6 +40,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS
 
@@ -266,6 +267,7 @@ struct kvm_vcpu_arch {
struct vgic_cpu vgic_cpu;
struct arch_timer_cpu timer_cpu;
struct kvm_pmu pmu;
+   struct kvm_spe spe;
 
/*
 * Anything that is not used directly from assembly code goes
diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index a3f85624313e..c51b125ed63f 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -34,6 +34,7 @@ config KVM
select HAVE_KVM_EVENTFD
select HAVE_KVM_IRQFD
select KVM_ARM_PMU if HW_PERF_EVENTS
+   select KVM_ARM_SPE if (HW_PERF_EVENTS && ARM_SPE_PMU)
select HAVE_KVM_MSI
select HAVE_KVM_IRQCHIP
select HAVE_KVM_IRQ_ROUTING
@@ -58,6 +59,12 @@ config KVM_ARM_PMU
  Adds support for a virtual Performance Monitoring Unit (PMU) in
  virtual machines.
 
+config KVM_ARM_SPE
+   bool
+   ---help---
+ Adds support for a virtual Statistical Profiling Extension(SPE) in
+ virtual machines.
+
 config KVM_INDIRECT_VECTORS
def_bool KVM && (HARDEN_BRANCH_PREDICTOR || HARDEN_EL2_VECTORS)
 
diff --git a/include/kvm/arm_spe.h b/include/kvm/arm_spe.h
new file mode 100644
index ..8c96bdfad6ac
--- /dev/null
+++ b/include/kvm/arm_spe.h
@@ -0,0 +1,18 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2018 ARM Ltd.
+ */
+
+#ifndef __ASM_ARM_KVM_SPE_H
+#define __ASM_ARM_KVM_SPE_H
+
+#include 
+#include 
+
+struct kvm_spe {
+   int irq;
+   bool ready; /* indicates that SPE KVM instance is ready for use */
+   bool created; /* SPE KVM instance is created, may not be ready yet */
+};
+
+#endif /* __ASM_ARM_KVM_SPE_H */
-- 
2.17.1

___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


[KVMTOOL PATCH 15/15] kvm: add a vcpu feature for SPEv1 support

2019-02-28 Thread Sudeep Holla
This adds a runtime-configurable option to kvmtool to enable Statistical
Profiling Extension version 1 support in the guest kernel. The command
line option --spe is required to use it.
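
For example (an illustrative invocation; the exact set of other options
depends on the setup):

	$ lkvm run -k Image -c 2 -m 512 --irqchip=gicv3 --spe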

Signed-off-by: Sudeep Holla 
---
 Makefile  |  2 +-
 arm/aarch64/arm-cpu.c |  2 +
 arm/aarch64/include/asm/kvm.h |  4 ++
 arm/aarch64/include/kvm/kvm-config-arch.h |  2 +
 arm/aarch64/include/kvm/kvm-cpu-arch.h|  3 +-
 arm/include/arm-common/kvm-config-arch.h  |  1 +
 arm/include/arm-common/spe.h  |  4 ++
 arm/spe.c | 81 +++
 include/linux/kvm.h   |  1 +
 9 files changed, 98 insertions(+), 2 deletions(-)
 create mode 100644 arm/include/arm-common/spe.h
 create mode 100644 arm/spe.c

diff --git a/Makefile b/Makefile
index a71efb664c87..828b14ec6e08 100644
--- a/Makefile
+++ b/Makefile
@@ -155,7 +155,7 @@ endif
 # ARM
 OBJS_ARM_COMMON	:= arm/fdt.o arm/gic.o arm/gicv2m.o arm/ioport.o \
   arm/kvm.o arm/kvm-cpu.o arm/pci.o arm/timer.o \
-  arm/pmu.o
+  arm/pmu.o arm/spe.o
 HDRS_ARM_COMMON	:= arm/include
 ifeq ($(ARCH), arm)
DEFINES += -DCONFIG_ARM
diff --git a/arm/aarch64/arm-cpu.c b/arm/aarch64/arm-cpu.c
index d7572b7790b1..6ccea033f361 100644
--- a/arm/aarch64/arm-cpu.c
+++ b/arm/aarch64/arm-cpu.c
@@ -6,6 +6,7 @@
 #include "arm-common/gic.h"
 #include "arm-common/timer.h"
 #include "arm-common/pmu.h"
+#include "arm-common/spe.h"
 
 #include 
 #include 
@@ -17,6 +18,7 @@ static void generate_fdt_nodes(void *fdt, struct kvm *kvm)
gic__generate_fdt_nodes(fdt, kvm->cfg.arch.irqchip);
timer__generate_fdt_nodes(fdt, kvm, timer_interrupts);
pmu__generate_fdt_nodes(fdt, kvm);
+   spe__generate_fdt_nodes(fdt, kvm);
 }
 
 static int arm_cpu__vcpu_init(struct kvm_cpu *vcpu)
diff --git a/arm/aarch64/include/asm/kvm.h b/arm/aarch64/include/asm/kvm.h
index 97c3478ee6e7..152f8b9d8c1a 100644
--- a/arm/aarch64/include/asm/kvm.h
+++ b/arm/aarch64/include/asm/kvm.h
@@ -102,6 +102,7 @@ struct kvm_regs {
 #define KVM_ARM_VCPU_EL1_32BIT 1 /* CPU running a 32bit VM */
 #define KVM_ARM_VCPU_PSCI_0_2  2 /* CPU uses PSCI v0.2 */
 #define KVM_ARM_VCPU_PMU_V3		3 /* Support guest PMUv3 */
+#define KVM_ARM_VCPU_SPE_V1		4 /* Support guest SPEv1 */
 
 struct kvm_vcpu_init {
__u32 target;
@@ -263,6 +264,9 @@ struct kvm_vcpu_events {
 #define KVM_ARM_VCPU_TIMER_CTRL		1
 #define   KVM_ARM_VCPU_TIMER_IRQ_VTIMER	0
 #define   KVM_ARM_VCPU_TIMER_IRQ_PTIMER	1
+#define KVM_ARM_VCPU_SPE_V1_CTRL   2
+#define   KVM_ARM_VCPU_SPE_V1_IRQ  0
+#define   KVM_ARM_VCPU_SPE_V1_INIT 1
 
 /* KVM_IRQ_LINE irq field index values */
 #define KVM_ARM_IRQ_TYPE_SHIFT 24
diff --git a/arm/aarch64/include/kvm/kvm-config-arch.h 
b/arm/aarch64/include/kvm/kvm-config-arch.h
index 04be43dfa9b2..9968e1666de5 100644
--- a/arm/aarch64/include/kvm/kvm-config-arch.h
+++ b/arm/aarch64/include/kvm/kvm-config-arch.h
@@ -6,6 +6,8 @@
"Run AArch32 guest"),   \
OPT_BOOLEAN('\0', "pmu", &(cfg)->has_pmuv3, \
"Create PMUv3 device"), \
+   OPT_BOOLEAN('\0', "spe", &(cfg)->has_spev1, \
+   "Create SPEv1 device"), \
OPT_U64('\0', "kaslr-seed", &(cfg)->kaslr_seed, \
"Specify random seed for Kernel Address Space " \
"Layout Randomization (KASLR)"),
diff --git a/arm/aarch64/include/kvm/kvm-cpu-arch.h 
b/arm/aarch64/include/kvm/kvm-cpu-arch.h
index a9d8563382c6..5abaf9505274 100644
--- a/arm/aarch64/include/kvm/kvm-cpu-arch.h
+++ b/arm/aarch64/include/kvm/kvm-cpu-arch.h
@@ -8,7 +8,8 @@
 #define ARM_VCPU_FEATURE_FLAGS(kvm, cpuid) {				\
 	[0] = ((!!(cpuid) << KVM_ARM_VCPU_POWER_OFF) |			\
 	       (!!(kvm)->cfg.arch.aarch32_guest << KVM_ARM_VCPU_EL1_32BIT) |	\
-	       (!!(kvm)->cfg.arch.has_pmuv3 << KVM_ARM_VCPU_PMU_V3))	\
+	       (!!(kvm)->cfg.arch.has_pmuv3 << KVM_ARM_VCPU_PMU_V3) |	\
+	       (!!(kvm)->cfg.arch.has_spev1 << KVM_ARM_VCPU_SPE_V1))	\
 }
 
 #define ARM_MPIDR_HWID_BITMASK 0xFF00FFUL
diff --git a/arm/include/arm-common/kvm-config-arch.h 
b/arm/include/arm-common/kvm-config-arch.h
index 6a196f1852de..2147fc4d04ee 100644
--- a/arm/include/arm-common/kvm-config-arch.h
+++ b/arm/include/arm-common/kvm-config-arch.h
@@ -9,6 +9,7 @@ s

[PATCH 09/13] arm64: KVM: add support to save/restore SPE profiling buffer controls

2019-02-28 Thread Sudeep Holla
Currently, since we don't support profiling using SPE in the guests, we
just save PMSCR_EL1, flush the profiling buffers and disable sampling.
However, in order to support simultaneous sampling both in the host and
in the guests, we need to save and restore the complete SPE profiling
buffer controls' context.

Let's add support for that and keep it disabled for now. We can enable
it conditionally, only when guests are allowed to use SPE.
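
Note the ordering the save path has to follow (reflected in the hunks
below):

	/*
	 * 1. Save and zero PMSCR_EL1, then isb(): stop new samples.
	 * 2. psb_csync() + dsb(nsh): drain buffered records to memory.
	 * 3. Save and zero PMBLIMITR_EL1, then isb(): disable the buffer.
	 * 4. Only now is it safe to save PMBPTR_EL1, PMBSR_EL1 and the
	 *    remaining sampling controls.
	 */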

Signed-off-by: Sudeep Holla 
---
 arch/arm64/kvm/hyp/debug-sr.c | 44 ---
 1 file changed, 35 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/kvm/hyp/debug-sr.c b/arch/arm64/kvm/hyp/debug-sr.c
index a2714a5eb3e9..a4e6eaf5934f 100644
--- a/arch/arm64/kvm/hyp/debug-sr.c
+++ b/arch/arm64/kvm/hyp/debug-sr.c
@@ -66,7 +66,8 @@
default:write_debug(ptr[0], reg, 0);\
}
 
-static void __hyp_text __debug_save_spe_nvhe(struct kvm_cpu_context *ctxt)
+static void __hyp_text
+__debug_save_spe_nvhe(struct kvm_cpu_context *ctxt, bool full_ctxt)
 {
u64 reg;
 
@@ -83,22 +84,37 @@ static void __hyp_text __debug_save_spe_nvhe(struct 
kvm_cpu_context *ctxt)
if (reg & BIT(SYS_PMBIDR_EL1_P_SHIFT))
return;
 
-   /* No; is the host actually using the thing? */
-   reg = read_sysreg_s(SYS_PMBLIMITR_EL1);
-   if (!(reg & BIT(SYS_PMBLIMITR_EL1_E_SHIFT)))
+   /* Save the control register and disable data generation */
+   ctxt->sys_regs[PMSCR_EL1] = read_sysreg_el1_s(SYS_PMSCR);
+
+   if (!ctxt->sys_regs[PMSCR_EL1])
return;
 
-   /* Yes; save the control register and disable data generation */
-   ctxt->sys_regs[PMSCR_EL1] = read_sysreg_el1_s(SYS_PMSCR);
write_sysreg_el1_s(0, SYS_PMSCR);
isb();
 
/* Now drain all buffered data to memory */
psb_csync();
dsb(nsh);
+
+   if (!full_ctxt)
+   return;
+
+   ctxt->sys_regs[PMBLIMITR_EL1] = read_sysreg_s(SYS_PMBLIMITR_EL1);
+   write_sysreg_s(0, SYS_PMBLIMITR_EL1);
+   isb();
+
+   ctxt->sys_regs[PMSICR_EL1] = read_sysreg_s(SYS_PMSICR_EL1);
+   ctxt->sys_regs[PMSIRR_EL1] = read_sysreg_s(SYS_PMSIRR_EL1);
+   ctxt->sys_regs[PMSFCR_EL1] = read_sysreg_s(SYS_PMSFCR_EL1);
+   ctxt->sys_regs[PMSEVFR_EL1] = read_sysreg_s(SYS_PMSEVFR_EL1);
+   ctxt->sys_regs[PMSLATFR_EL1] = read_sysreg_s(SYS_PMSLATFR_EL1);
+   ctxt->sys_regs[PMBPTR_EL1] = read_sysreg_s(SYS_PMBPTR_EL1);
+   ctxt->sys_regs[PMBSR_EL1] = read_sysreg_s(SYS_PMBSR_EL1);
 }
 
-static void __hyp_text __debug_restore_spe_nvhe(struct kvm_cpu_context *ctxt)
+static void __hyp_text
+__debug_restore_spe_nvhe(struct kvm_cpu_context *ctxt, bool full_ctxt)
 {
if (!ctxt->sys_regs[PMSCR_EL1])
return;
@@ -107,6 +123,16 @@ static void __hyp_text __debug_restore_spe_nvhe(struct 
kvm_cpu_context *ctxt)
isb();
 
/* Re-enable data generation */
+   if (full_ctxt) {
+   write_sysreg_s(ctxt->sys_regs[PMBPTR_EL1], SYS_PMBPTR_EL1);
+   write_sysreg_s(ctxt->sys_regs[PMBLIMITR_EL1], 
SYS_PMBLIMITR_EL1);
+   write_sysreg_s(ctxt->sys_regs[PMSFCR_EL1], SYS_PMSFCR_EL1);
+   write_sysreg_s(ctxt->sys_regs[PMSEVFR_EL1], SYS_PMSEVFR_EL1);
+   write_sysreg_s(ctxt->sys_regs[PMSLATFR_EL1], SYS_PMSLATFR_EL1);
+   write_sysreg_s(ctxt->sys_regs[PMSIRR_EL1], SYS_PMSIRR_EL1);
+   write_sysreg_s(ctxt->sys_regs[PMSICR_EL1], SYS_PMSICR_EL1);
+   write_sysreg_s(ctxt->sys_regs[PMBSR_EL1], SYS_PMBSR_EL1);
+   }
write_sysreg_el1_s(ctxt->sys_regs[PMSCR_EL1], SYS_PMSCR);
 }
 
@@ -179,7 +205,7 @@ void __hyp_text __debug_restore_host_context(struct 
kvm_vcpu *vcpu)
 guest_ctxt = &vcpu->arch.ctxt;
 
if (!has_vhe())
-   __debug_restore_spe_nvhe(host_ctxt);
+   __debug_restore_spe_nvhe(host_ctxt, false);
 
if (!(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY))
return;
@@ -203,7 +229,7 @@ void __hyp_text __debug_save_host_context(struct kvm_vcpu 
*vcpu)
 
host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
if (!has_vhe())
-   __debug_save_spe_nvhe(host_ctxt);
+   __debug_save_spe_nvhe(host_ctxt, false);
 }
 
 void __hyp_text __debug_save_guest_context(struct kvm_vcpu *vcpu)
-- 
2.17.1

___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


[PATCH 00/13] arm64: KVM: add SPE profiling support for guest

2019-02-28 Thread Sudeep Holla
Hi,

This series implements support for allowing KVM guests to use the Arm
Statistical Profiling Extension (SPE).

The patches are also available on a branch[1]. The last two extra
patches are for the kvmtool if someone wants to play with it.

Regards,
Sudeep

[1] git://git.kernel.org/pub/scm/linux/kernel/git/sudeep.holla/linux.git kvm_spe

Sudeep Holla (13):
  KVM: arm64: add {read,write}_sysreg_elx_s versions for new registers
  dt-bindings: ARM SPE: highlight the need for PPI partitions on
heterogeneous systems
  arm64: KVM: reset E2PB correctly in MDCR_EL2 when exiting the
guest(VHE)
  arm64: KVM: define SPE data structure for each vcpu
  arm64: KVM: add access handler for SPE system registers
  arm64: KVM/VHE: enable the use PMSCR_EL12 on VHE systems
  arm64: KVM: split debug save restore across vm/traps activation
  arm64: KVM/debug: drop pmscr_el1 and use sys_regs[PMSCR_EL1] in
kvm_cpu_context
  arm64: KVM: add support to save/restore SPE profiling buffer controls
  arm64: KVM: enable conditional save/restore full SPE profiling buffer
controls
  arm64: KVM/debug: trap all accesses to SPE controls at EL1
  KVM: arm64: add a new vcpu device control group for SPEv1
  KVM: arm64: enable SPE support

 .../devicetree/bindings/arm/spe-pmu.txt   |   5 +-
 Documentation/virtual/kvm/devices/vcpu.txt|  28 +++
 arch/arm64/include/asm/kvm_host.h |  19 +-
 arch/arm64/include/asm/kvm_hyp.h  |  26 ++-
 arch/arm64/include/uapi/asm/kvm.h |   4 +
 arch/arm64/kvm/Kconfig|   7 +
 arch/arm64/kvm/Makefile   |   1 +
 arch/arm64/kvm/guest.c|   9 +
 arch/arm64/kvm/hyp/debug-sr.c |  98 +++
 arch/arm64/kvm/hyp/switch.c   |  18 +-
 arch/arm64/kvm/reset.c|   3 +
 arch/arm64/kvm/sys_regs.c |  35 
 include/kvm/arm_spe.h |  70 
 include/uapi/linux/kvm.h  |   1 +
 virt/kvm/arm/arm.c|   4 +
 virt/kvm/arm/spe.c| 163 ++
 16 files changed, 446 insertions(+), 45 deletions(-)
 create mode 100644 include/kvm/arm_spe.h
 create mode 100644 virt/kvm/arm/spe.c

-- 
2.17.1

___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


[PATCH 01/13] KVM: arm64: add {read, write}_sysreg_elx_s versions for new registers

2019-02-28 Thread Sudeep Holla
KVM provides {read,write}_sysreg_el1() to access ${REG}_EL1 when we
really want to read/write the EL1 register without any VHE register
redirection.

The SPE registers are not supported by many versions of GAS, so we
mostly use the mrs_s macro, which takes the sys_reg() representation.

However, SPE registers in the sys_reg() representation don't work with
the existing {read,write}_sysreg_el1() macros. We need to add
{read,write}_sysreg_el1_s versions to cope with them.
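
With these in place, callers can use the sys_reg() encodings directly,
e.g. (as a later patch in this series does):

	u64 pmscr = read_sysreg_el1_s(SYS_PMSCR); /* PMSCR_EL1 or PMSCR_EL12 */
	write_sysreg_el1_s(0, SYS_PMSCR);	  /* via ARM64_HAS_VIRT_HOST_EXTN */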

Signed-off-by: Sudeep Holla 
---
 arch/arm64/include/asm/kvm_hyp.h | 19 +++
 1 file changed, 19 insertions(+)

Hi,

There were alternatives proposed to this in the past[1]. I am happy to
switch to that version and use it instead of this patch.

Regards,
Sudeep

[1] https://patchwork.kernel.org/patch/10435599/

diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index a80a7ef57325..68c49f258729 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -34,6 +34,15 @@
 : "=r" (reg)); \
reg;\
})
+#define read_sysreg_elx_s(r,nvh,vh)\
+   ({  \
+   u64 reg;\
+   asm volatile(ALTERNATIVE("mrs_s %0, " __stringify(r##nvh),\
+"mrs_s %0, " __stringify(r##vh),\
+ARM64_HAS_VIRT_HOST_EXTN)  \
+: "=r" (reg)); \
+   reg;\
+   })
 
 #define write_sysreg_elx(v,r,nvh,vh)   \
do {\
@@ -43,6 +52,14 @@
 ARM64_HAS_VIRT_HOST_EXTN)  \
 : : "rZ" (__val)); \
} while (0)
+#define write_sysreg_elx_s(v,r,nvh,vh) \
+   do {\
+   u64 __val = (u64)(v);   \
+   asm volatile(ALTERNATIVE("msr_s " __stringify(r##nvh) ", %x0",\
+"msr_s " __stringify(r##vh) ", %x0",\
+ARM64_HAS_VIRT_HOST_EXTN)  \
+: : "rZ" (__val)); \
+   } while (0)
 
 /*
  * Unified accessors for registers that have a different encoding
@@ -71,7 +88,9 @@
 #define read_sysreg_el0(r) read_sysreg_elx(r, _EL0, _EL02)
 #define write_sysreg_el0(v,r)  write_sysreg_elx(v, r, _EL0, _EL02)
 #define read_sysreg_el1(r) read_sysreg_elx(r, _EL1, _EL12)
+#define read_sysreg_el1_s(r)   read_sysreg_elx_s(r, _EL1, _EL12)
 #define write_sysreg_el1(v,r)  write_sysreg_elx(v, r, _EL1, _EL12)
+#define write_sysreg_el1_s(v,r)write_sysreg_elx_s(v, r, _EL1, _EL12)
 
 /* The VHE specific system registers and their encoding */
 #define sctlr_EL12  sys_reg(3, 5, 1, 0, 0)
-- 
2.17.1

___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


[KVMTOOL PATCH 14/15] kvm: update/sync the uapi headers from kernel(v5.0)

2019-02-28 Thread Sudeep Holla
Since the existing versions of the uapi headers in kvmtool are quite
outdated, let's bring in the latest copies of the following uapi headers
from kernel v5.0:
include/uapi/linux/kvm.h
arch/arm64/include/uapi/asm/kvm.h

Signed-off-by: Sudeep Holla 
---
 arm/aarch64/include/asm/kvm.h |  41 +-
 include/linux/kvm.h   | 248 +-
 2 files changed, 282 insertions(+), 7 deletions(-)

diff --git a/arm/aarch64/include/asm/kvm.h b/arm/aarch64/include/asm/kvm.h
index c2860358ae3e..97c3478ee6e7 100644
--- a/arm/aarch64/include/asm/kvm.h
+++ b/arm/aarch64/include/asm/kvm.h
@@ -1,3 +1,4 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
 /*
  * Copyright (C) 2012,2013 - ARM Ltd
  * Author: Marc Zyngier 
@@ -38,6 +39,9 @@
 #define __KVM_HAVE_GUEST_DEBUG
 #define __KVM_HAVE_IRQ_LINE
 #define __KVM_HAVE_READONLY_MEM
+#define __KVM_HAVE_VCPU_EVENTS
+
+#define KVM_COALESCED_MMIO_PAGE_OFFSET 1

 #define KVM_REG_SIZE(id)   \
(1U << (((id) & KVM_REG_SIZE_MASK) >> KVM_REG_SIZE_SHIFT))
@@ -88,6 +92,7 @@ struct kvm_regs {
 #define KVM_VGIC_V3_ADDR_TYPE_DIST 2
 #define KVM_VGIC_V3_ADDR_TYPE_REDIST   3
 #define KVM_VGIC_ITS_ADDR_TYPE 4
+#define KVM_VGIC_V3_ADDR_TYPE_REDIST_REGION5

 #define KVM_VGIC_V3_DIST_SIZE  SZ_64K
 #define KVM_VGIC_V3_REDIST_SIZE(2 * SZ_64K)
@@ -143,11 +148,25 @@ struct kvm_debug_exit_arch {
 #define KVM_GUESTDBG_USE_HW(1 << 17)

 struct kvm_sync_regs {
+   /* Used with KVM_CAP_ARM_USER_IRQ */
+   __u64 device_irq_level;
 };

 struct kvm_arch_memory_slot {
 };

+/* for KVM_GET/SET_VCPU_EVENTS */
+struct kvm_vcpu_events {
+   struct {
+   __u8 serror_pending;
+   __u8 serror_has_esr;
+   /* Align it to 8 bytes */
+   __u8 pad[6];
+   __u64 serror_esr;
+   } exception;
+   __u32 reserved[12];
+};
+
 /* If you need to interpret the index values, here is the key: */
 #define KVM_REG_ARM_COPROC_MASK	0x000000000FFF0000
 #define KVM_REG_ARM_COPROC_SHIFT   16
@@ -191,10 +210,22 @@ struct kvm_arch_memory_slot {

 #define ARM64_SYS_REG(...) (__ARM64_SYS_REG(__VA_ARGS__) | KVM_REG_SIZE_U64)

+/* Physical Timer EL0 Registers */
+#define KVM_REG_ARM_PTIMER_CTL ARM64_SYS_REG(3, 3, 14, 2, 1)
+#define KVM_REG_ARM_PTIMER_CVAL	ARM64_SYS_REG(3, 3, 14, 2, 2)
+#define KVM_REG_ARM_PTIMER_CNT ARM64_SYS_REG(3, 3, 14, 0, 1)
+
+/* EL0 Virtual Timer Registers */
 #define KVM_REG_ARM_TIMER_CTL  ARM64_SYS_REG(3, 3, 14, 3, 1)
 #define KVM_REG_ARM_TIMER_CNT  ARM64_SYS_REG(3, 3, 14, 3, 2)
 #define KVM_REG_ARM_TIMER_CVAL ARM64_SYS_REG(3, 3, 14, 0, 2)

+/* KVM-as-firmware specific pseudo-registers */
+#define KVM_REG_ARM_FW (0x0014 << KVM_REG_ARM_COPROC_SHIFT)
+#define KVM_REG_ARM_FW_REG(r)  (KVM_REG_ARM64 | KVM_REG_SIZE_U64 | \
+				KVM_REG_ARM_FW | ((r) & 0xffff))
+#define KVM_REG_ARM_PSCI_VERSION   KVM_REG_ARM_FW_REG(0)
+
 /* Device Control API: ARM VGIC */
 #define KVM_DEV_ARM_VGIC_GRP_ADDR  0
 #define KVM_DEV_ARM_VGIC_GRP_DIST_REGS 1
@@ -212,18 +243,26 @@ struct kvm_arch_memory_slot {
 #define KVM_DEV_ARM_VGIC_GRP_REDIST_REGS 5
 #define KVM_DEV_ARM_VGIC_GRP_CPU_SYSREGS 6
 #define KVM_DEV_ARM_VGIC_GRP_LEVEL_INFO  7
+#define KVM_DEV_ARM_VGIC_GRP_ITS_REGS 8
 #define KVM_DEV_ARM_VGIC_LINE_LEVEL_INFO_SHIFT 10
 #define KVM_DEV_ARM_VGIC_LINE_LEVEL_INFO_MASK \
(0x3fULL << KVM_DEV_ARM_VGIC_LINE_LEVEL_INFO_SHIFT)
 #define KVM_DEV_ARM_VGIC_LINE_LEVEL_INTID_MASK 0x3ff
 #define VGIC_LEVEL_INFO_LINE_LEVEL 0

-#define   KVM_DEV_ARM_VGIC_CTRL_INIT   0
+#define   KVM_DEV_ARM_VGIC_CTRL_INIT   0
+#define   KVM_DEV_ARM_ITS_SAVE_TABLES   1
+#define   KVM_DEV_ARM_ITS_RESTORE_TABLES2
+#define   KVM_DEV_ARM_VGIC_SAVE_PENDING_TABLES 3
+#define   KVM_DEV_ARM_ITS_CTRL_RESET   4

 /* Device Control API on vcpu fd */
 #define KVM_ARM_VCPU_PMU_V3_CTRL   0
 #define   KVM_ARM_VCPU_PMU_V3_IRQ  0
 #define   KVM_ARM_VCPU_PMU_V3_INIT 1
+#define KVM_ARM_VCPU_TIMER_CTRL		1
+#define   KVM_ARM_VCPU_TIMER_IRQ_VTIMER	0
+#define   KVM_ARM_VCPU_TIMER_IRQ_PTIMER	1

 /* KVM_IRQ_LINE irq field index values */
 #define KVM_ARM_IRQ_TYPE_SHIFT 24
diff --git a/include/linux/kvm.h b/include/linux/kvm.h
index f51d5082a377..6d4ea4b6c922 100644
--- a/include/linux/kvm.h
+++ b/include/linux/kvm.h
@@ -1,3 +1,4 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
 #ifndef __LINUX_KVM_H
 #define __LINUX_KVM_H

@@ -155,6 +156,35 @@ struct kvm_s390_skeys {
__u32 reserved[9];
 };

+#define KVM_S390_CMMA_PEEK (1 << 0)
+
+/**
+ * kvm_s390_cmma_log - Used for CMMA migration.
+ *
+ * Used both for input and output.
+ 

[PATCH 06/13] arm64: KVM/VHE: enable the use PMSCR_EL12 on VHE systems

2019-02-28 Thread Sudeep Holla
Currently, we are just using PMSCR_EL1 in the host for non-VHE systems.
We already have the {read,write}_sysreg_el*() accessors for accessing a
particular EL's sysregs in the presence of VHE.

Let's just define PMSCR_EL12 and start making use of it here; this will
access the right register on both VHE and non-VHE systems. This change
is required to add SPE guest support on VHE systems.

Signed-off-by: Sudeep Holla 
---
 arch/arm64/include/asm/kvm_hyp.h | 1 +
 arch/arm64/kvm/hyp/debug-sr.c| 6 +++---
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 68c49f258729..a4a6f6deef89 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -102,6 +102,7 @@
 #define afsr1_EL12  sys_reg(3, 5, 5, 1, 1)
 #define esr_EL12sys_reg(3, 5, 5, 2, 0)
 #define far_EL12sys_reg(3, 5, 6, 0, 0)
+#define SYS_PMSCR_EL12  sys_reg(3, 5, 9, 9, 0)
 #define mair_EL12   sys_reg(3, 5, 10, 2, 0)
 #define amair_EL12  sys_reg(3, 5, 10, 3, 0)
 #define vbar_EL12   sys_reg(3, 5, 12, 0, 0)
diff --git a/arch/arm64/kvm/hyp/debug-sr.c b/arch/arm64/kvm/hyp/debug-sr.c
index 50009766e5e5..fa51236ebcb3 100644
--- a/arch/arm64/kvm/hyp/debug-sr.c
+++ b/arch/arm64/kvm/hyp/debug-sr.c
@@ -89,8 +89,8 @@ static void __hyp_text __debug_save_spe_nvhe(u64 *pmscr_el1)
return;
 
/* Yes; save the control register and disable data generation */
-   *pmscr_el1 = read_sysreg_s(SYS_PMSCR_EL1);
-   write_sysreg_s(0, SYS_PMSCR_EL1);
+   *pmscr_el1 = read_sysreg_el1_s(SYS_PMSCR);
+   write_sysreg_el1_s(0, SYS_PMSCR);
isb();
 
/* Now drain all buffered data to memory */
@@ -107,7 +107,7 @@ static void __hyp_text __debug_restore_spe_nvhe(u64 
pmscr_el1)
isb();
 
/* Re-enable data generation */
-   write_sysreg_s(pmscr_el1, SYS_PMSCR_EL1);
+   write_sysreg_el1_s(pmscr_el1, SYS_PMSCR);
 }
 
 static void __hyp_text __debug_save_state(struct kvm_vcpu *vcpu,
-- 
2.17.1

___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


[PATCH 08/13] arm64: KVM/debug: drop pmscr_el1 and use sys_regs[PMSCR_EL1] in kvm_cpu_context

2019-02-28 Thread Sudeep Holla
kvm_cpu_context can now stash the complete SPE buffer control context.
We no longer need the pmscr_el1 field in kvm_vcpu_arch, so it can be
dropped.

Signed-off-by: Sudeep Holla 
---
 arch/arm64/include/asm/kvm_host.h |  2 --
 arch/arm64/kvm/hyp/debug-sr.c | 26 +++---
 2 files changed, 15 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h 
b/arch/arm64/include/asm/kvm_host.h
index d113b8271a75..9a5b90dc8962 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -272,8 +272,6 @@ struct kvm_vcpu_arch {
struct {
/* {Break,watch}point registers */
struct kvm_guest_debug_arch regs;
-   /* Statistical profiling extension */
-   u64 pmscr_el1;
} host_debug_state;
 
/* VGIC state */
diff --git a/arch/arm64/kvm/hyp/debug-sr.c b/arch/arm64/kvm/hyp/debug-sr.c
index 618884df1dc4..a2714a5eb3e9 100644
--- a/arch/arm64/kvm/hyp/debug-sr.c
+++ b/arch/arm64/kvm/hyp/debug-sr.c
@@ -66,19 +66,19 @@
default:write_debug(ptr[0], reg, 0);\
}
 
-static void __hyp_text __debug_save_spe_nvhe(u64 *pmscr_el1)
+static void __hyp_text __debug_save_spe_nvhe(struct kvm_cpu_context *ctxt)
 {
u64 reg;
 
/* Clear pmscr in case of early return */
-   *pmscr_el1 = 0;
+   ctxt->sys_regs[PMSCR_EL1] = 0;
 
/* SPE present on this CPU? */
if (!cpuid_feature_extract_unsigned_field(read_sysreg(id_aa64dfr0_el1),
  ID_AA64DFR0_PMSVER_SHIFT))
return;
 
-   /* Yes; is it owned by EL3? */
+   /* Yes; is it owned by higher EL? */
reg = read_sysreg_s(SYS_PMBIDR_EL1);
if (reg & BIT(SYS_PMBIDR_EL1_P_SHIFT))
return;
@@ -89,7 +89,7 @@ static void __hyp_text __debug_save_spe_nvhe(u64 *pmscr_el1)
return;
 
/* Yes; save the control register and disable data generation */
-   *pmscr_el1 = read_sysreg_el1_s(SYS_PMSCR);
+   ctxt->sys_regs[PMSCR_EL1] = read_sysreg_el1_s(SYS_PMSCR);
write_sysreg_el1_s(0, SYS_PMSCR);
isb();
 
@@ -98,16 +98,16 @@ static void __hyp_text __debug_save_spe_nvhe(u64 *pmscr_el1)
dsb(nsh);
 }
 
-static void __hyp_text __debug_restore_spe_nvhe(u64 pmscr_el1)
+static void __hyp_text __debug_restore_spe_nvhe(struct kvm_cpu_context *ctxt)
 {
-   if (!pmscr_el1)
+   if (!ctxt->sys_regs[PMSCR_EL1])
return;
 
/* The host page table is installed, but not yet synchronised */
isb();
 
/* Re-enable data generation */
-   write_sysreg_el1_s(pmscr_el1, SYS_PMSCR);
+   write_sysreg_el1_s(ctxt->sys_regs[PMSCR_EL1], SYS_PMSCR);
 }
 
 static void __hyp_text __debug_save_state(struct kvm_vcpu *vcpu,
@@ -175,14 +175,15 @@ void __hyp_text __debug_restore_host_context(struct 
kvm_vcpu *vcpu)
struct kvm_guest_debug_arch *host_dbg;
struct kvm_guest_debug_arch *guest_dbg;
 
+   host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
+   guest_ctxt = &vcpu->arch.ctxt;
+
if (!has_vhe())
-   __debug_restore_spe_nvhe(vcpu->arch.host_debug_state.pmscr_el1);
+   __debug_restore_spe_nvhe(host_ctxt);
 
if (!(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY))
return;
 
-   host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
-   guest_ctxt = &vcpu->arch.ctxt;
host_dbg = >arch.host_debug_state.regs;
guest_dbg = kern_hyp_va(vcpu->arch.debug_ptr);
 
@@ -198,8 +199,11 @@ void __hyp_text __debug_save_host_context(struct kvm_vcpu 
*vcpu)
 * Non-VHE: Disable and flush SPE data generation
 * VHE: The vcpu can run, but it can't hide.
 */
+   struct kvm_cpu_context *host_ctxt;
+
+   host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
if (!has_vhe())
-   __debug_save_spe_nvhe(&vcpu->arch.host_debug_state.pmscr_el1);
+   __debug_save_spe_nvhe(host_ctxt);
 }
 
 void __hyp_text __debug_save_guest_context(struct kvm_vcpu *vcpu)
-- 
2.17.1

___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


[PATCH 11/13] arm64: KVM/debug: trap all accesses to SPE controls at EL1

2019-02-28 Thread Sudeep Holla
Now that we have all the save/restore mechanisms in place, let's trap
accesses to the SPE profiling buffer controls at EL1 to EL2. This also
changes the translation regime used by the buffer from EL2 stage 1 to
EL1 stage 1 on VHE systems.

Signed-off-by: Sudeep Holla 
---
 arch/arm64/kvm/hyp/switch.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index 3054b8b8f037..8738e9438780 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -106,6 +106,7 @@ static void activate_traps_vhe(struct kvm_vcpu *vcpu)
 
write_sysreg(val, cpacr_el1);
 
+   write_sysreg(vcpu->arch.mdcr_el2 | 2 << MDCR_EL2_E2PB_SHIFT, mdcr_el2);
write_sysreg(kvm_get_hyp_vector(), vbar_el1);
 }
 NOKPROBE_SYMBOL(activate_traps_vhe);
@@ -123,6 +124,7 @@ static void __hyp_text __activate_traps_nvhe(struct 
kvm_vcpu *vcpu)
__activate_traps_fpsimd32(vcpu);
}
 
+   write_sysreg(vcpu->arch.mdcr_el2 | 2 << MDCR_EL2_E2PB_SHIFT, mdcr_el2);
write_sysreg(val, cptr_el2);
 }
 
-- 
2.17.1

___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


[PATCH 12/13] KVM: arm64: add a new vcpu device control group for SPEv1

2019-02-28 Thread Sudeep Holla
To configure the virtual SPEv1 overflow interrupt number, we use the
vcpu kvm_device ioctl, encapsulating the KVM_ARM_VCPU_SPE_V1_IRQ
attribute within the KVM_ARM_VCPU_SPE_V1_CTRL group.

After configuring the SPEv1, call the vcpu ioctl with attribute
KVM_ARM_VCPU_SPE_V1_INIT to initialize the SPEv1.

Signed-off-by: Sudeep Holla 
---
 Documentation/virtual/kvm/devices/vcpu.txt |  28 
 arch/arm64/include/asm/kvm_host.h  |   2 +-
 arch/arm64/include/uapi/asm/kvm.h  |   4 +
 arch/arm64/kvm/Makefile|   1 +
 arch/arm64/kvm/guest.c |   9 ++
 arch/arm64/kvm/reset.c |   3 +
 include/kvm/arm_spe.h  |  35 +
 include/uapi/linux/kvm.h   |   1 +
 virt/kvm/arm/spe.c | 163 +
 9 files changed, 245 insertions(+), 1 deletion(-)
 create mode 100644 virt/kvm/arm/spe.c

diff --git a/Documentation/virtual/kvm/devices/vcpu.txt 
b/Documentation/virtual/kvm/devices/vcpu.txt
index 2b5dab16c4f2..d1ece488aeee 100644
--- a/Documentation/virtual/kvm/devices/vcpu.txt
+++ b/Documentation/virtual/kvm/devices/vcpu.txt
@@ -60,3 +60,31 @@ time to use the number provided for a given timer, 
overwriting any previously
 configured values on other VCPUs.  Userspace should configure the interrupt
 numbers on at least one VCPU after creating all VCPUs and before running any
 VCPUs.
+
+3. GROUP: KVM_ARM_VCPU_SPE_V1_CTRL
+Architectures: ARM64
+
+3.1. ATTRIBUTE: KVM_ARM_VCPU_SPE_V1_IRQ
+Parameters: in kvm_device_attr.addr the address for SPE buffer overflow interrupt
+   is a pointer to an int
+Returns: -EBUSY: The SPE overflow interrupt is already set
+ -ENXIO: The overflow interrupt is not set when attempting to get it
+ -ENODEV: SPEv1 not supported
+ -EINVAL: Invalid SPE overflow interrupt number supplied or
+  trying to set the IRQ number without using an in-kernel
+  irqchip.
+
+A value describing the SPEv1 (Statistical Profiling Extension v1) overflow
+interrupt number for this vcpu. This interrupt should be a PPI and the
+interrupt type and number must be the same for each vcpu.
+
+3.2. ATTRIBUTE: KVM_ARM_VCPU_SPE_V1_INIT
+Parameters: no additional parameter in kvm_device_attr.addr
+Returns: -ENODEV: SPEv1 not supported or GIC not initialized
+ -ENXIO: SPEv1 not properly configured or in-kernel irqchip not
+ configured as required prior to calling this attribute
+ -EBUSY: SPEv1 already initialized
+
+Request the initialization of the SPEv1.  If using the SPEv1 with an in-kernel
+virtual GIC implementation, this must be done after initializing the in-kernel
+irqchip.
diff --git a/arch/arm64/include/asm/kvm_host.h 
b/arch/arm64/include/asm/kvm_host.h
index 9a5b90dc8962..1c40eb29093e 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -44,7 +44,7 @@
 
 #define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS
 
-#define KVM_VCPU_MAX_FEATURES 4
+#define KVM_VCPU_MAX_FEATURES 5
 
 #define KVM_REQ_SLEEP \
KVM_ARCH_REQ_FLAGS(0, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
diff --git a/arch/arm64/include/uapi/asm/kvm.h 
b/arch/arm64/include/uapi/asm/kvm.h
index 97c3478ee6e7..152f8b9d8c1a 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -102,6 +102,7 @@ struct kvm_regs {
 #define KVM_ARM_VCPU_EL1_32BIT 1 /* CPU running a 32bit VM */
 #define KVM_ARM_VCPU_PSCI_0_2  2 /* CPU uses PSCI v0.2 */
 #define KVM_ARM_VCPU_PMU_V3		3 /* Support guest PMUv3 */
+#define KVM_ARM_VCPU_SPE_V1		4 /* Support guest SPEv1 */
 
 struct kvm_vcpu_init {
__u32 target;
@@ -263,6 +264,9 @@ struct kvm_vcpu_events {
 #define KVM_ARM_VCPU_TIMER_CTRL    1
 #define   KVM_ARM_VCPU_TIMER_IRQ_VTIMER    0
 #define   KVM_ARM_VCPU_TIMER_IRQ_PTIMER    1
+#define KVM_ARM_VCPU_SPE_V1_CTRL   2
+#define   KVM_ARM_VCPU_SPE_V1_IRQ  0
+#define   KVM_ARM_VCPU_SPE_V1_INIT 1
 
 /* KVM_IRQ_LINE irq field index values */
 #define KVM_ARM_IRQ_TYPE_SHIFT 24
diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
index 0f2a135ba15b..6c09fd1cb647 100644
--- a/arch/arm64/kvm/Makefile
+++ b/arch/arm64/kvm/Makefile
@@ -37,3 +37,4 @@ kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic/vgic-debug.o
 kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/irqchip.o
 kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/arch_timer.o
 kvm-$(CONFIG_KVM_ARM_PMU) += $(KVM)/arm/pmu.o
+kvm-$(CONFIG_KVM_ARM_SPE) += $(KVM)/arm/spe.o
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index dd436a50fce7..b92a540d7fdc 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -497,6 +497,9 @@ int kvm_arm_vcpu_arch_set_attr(struct kvm_vcpu *vcpu,
case KVM_ARM_VCPU_TIMER_CTRL:
ret = kvm_arm_timer_set_attr(vcpu, attr);
break;
+   case KVM_ARM_VCPU_SPE_V1_CTRL:
+   ret
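
The remainder of this patch is truncated in the archive. As a rough
illustration of the userspace side of the attribute flow described in the
commit message above, here is a minimal sketch; the vcpu file descriptor and
the chosen PPI number are assumptions for the example, not part of the patch:

	#include <stdint.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	/* Minimal sketch: set the SPE overflow PPI, then initialize SPEv1.
	 * Assumes 'vcpu_fd' is an open vcpu file descriptor and that the
	 * kernel carries the uapi constants added by this series. */
	static int configure_vcpu_spe(int vcpu_fd)
	{
		int irq = 23;	/* example PPI number; an assumption */
		struct kvm_device_attr attr = {
			.group = KVM_ARM_VCPU_SPE_V1_CTRL,
			.attr  = KVM_ARM_VCPU_SPE_V1_IRQ,
			.addr  = (uint64_t)(unsigned long)&irq,
		};

		if (ioctl(vcpu_fd, KVM_SET_DEVICE_ATTR, &attr))
			return -1;

		/* INIT takes no payload; per the documentation above it must
		 * be called after the in-kernel GIC is initialized. */
		attr.attr = KVM_ARM_VCPU_SPE_V1_INIT;
		attr.addr = 0;
		return ioctl(vcpu_fd, KVM_SET_DEVICE_ATTR, &attr);
	}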

[PATCH 07/13] arm64: KVM: split debug save restore across vm/traps activation

2019-02-28 Thread Sudeep Holla
If we enable profiling buffer controls at EL1, a trap exception is generated
to EL2; it also changes the profiling buffer to use the EL1&0 stage 1
translation regime in the case of VHE. To support SPE both in the guest and
the host, we need to first stop profiling and flush the profiling buffers
before we activate/switch the vm or enable/disable the traps.

In preparation for that, let's split the debug save/restore functionality
into 4 steps:
1. debug_save_host_context - saves the host context
2. debug_restore_guest_context - restores the guest context
3. debug_save_guest_context - saves the guest context
4. debug_restore_host_context - restores the host context

Let's rename the existing __debug_switch_to_{host,guest} so they are aligned
with the above, and just add placeholders for the new ones added here, as we
need them to support SPE in guests.

Signed-off-by: Sudeep Holla 
---
 arch/arm64/include/asm/kvm_hyp.h |  6 --
 arch/arm64/kvm/hyp/debug-sr.c| 25 -
 arch/arm64/kvm/hyp/switch.c  | 12 
 3 files changed, 28 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index a4a6f6deef89..0deea8c75f77 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -163,8 +163,10 @@ void sysreg_restore_guest_state_vhe(struct kvm_cpu_context *ctxt);
 void __sysreg32_save_state(struct kvm_vcpu *vcpu);
 void __sysreg32_restore_state(struct kvm_vcpu *vcpu);
 
-void __debug_switch_to_guest(struct kvm_vcpu *vcpu);
-void __debug_switch_to_host(struct kvm_vcpu *vcpu);
+void __debug_save_host_context(struct kvm_vcpu *vcpu);
+void __debug_restore_guest_context(struct kvm_vcpu *vcpu);
+void __debug_save_guest_context(struct kvm_vcpu *vcpu);
+void __debug_restore_host_context(struct kvm_vcpu *vcpu);
 
 void __fpsimd_save_state(struct user_fpsimd_state *fp_regs);
 void __fpsimd_restore_state(struct user_fpsimd_state *fp_regs);
diff --git a/arch/arm64/kvm/hyp/debug-sr.c b/arch/arm64/kvm/hyp/debug-sr.c
index fa51236ebcb3..618884df1dc4 100644
--- a/arch/arm64/kvm/hyp/debug-sr.c
+++ b/arch/arm64/kvm/hyp/debug-sr.c
@@ -149,20 +149,13 @@ static void __hyp_text __debug_restore_state(struct kvm_vcpu *vcpu,
write_sysreg(ctxt->sys_regs[MDCCINT_EL1], mdccint_el1);
 }
 
-void __hyp_text __debug_switch_to_guest(struct kvm_vcpu *vcpu)
+void __hyp_text __debug_restore_guest_context(struct kvm_vcpu *vcpu)
 {
struct kvm_cpu_context *host_ctxt;
struct kvm_cpu_context *guest_ctxt;
struct kvm_guest_debug_arch *host_dbg;
struct kvm_guest_debug_arch *guest_dbg;
 
-   /*
-* Non-VHE: Disable and flush SPE data generation
-* VHE: The vcpu can run, but it can't hide.
-*/
-   if (!has_vhe())
-   __debug_save_spe_nvhe(&vcpu->arch.host_debug_state.pmscr_el1);
-
if (!(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY))
return;
 
@@ -175,7 +168,7 @@ void __hyp_text __debug_switch_to_guest(struct kvm_vcpu *vcpu)
__debug_restore_state(vcpu, guest_dbg, guest_ctxt);
 }
 
-void __hyp_text __debug_switch_to_host(struct kvm_vcpu *vcpu)
+void __hyp_text __debug_restore_host_context(struct kvm_vcpu *vcpu)
 {
struct kvm_cpu_context *host_ctxt;
struct kvm_cpu_context *guest_ctxt;
@@ -199,6 +192,20 @@ void __hyp_text __debug_switch_to_host(struct kvm_vcpu *vcpu)
vcpu->arch.flags &= ~KVM_ARM64_DEBUG_DIRTY;
 }
 
+void __hyp_text __debug_save_host_context(struct kvm_vcpu *vcpu)
+{
+   /*
+* Non-VHE: Disable and flush SPE data generation
+* VHE: The vcpu can run, but it can't hide.
+*/
+   if (!has_vhe())
+   __debug_save_spe_nvhe(&vcpu->arch.host_debug_state.pmscr_el1);
+}
+
+void __hyp_text __debug_save_guest_context(struct kvm_vcpu *vcpu)
+{
+}
+
 u32 __hyp_text __kvm_get_mdcr_el2(void)
 {
return read_sysreg(mdcr_el2);
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index 08e2a01188ac..3054b8b8f037 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -472,6 +472,7 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
	guest_ctxt = &vcpu->arch.ctxt;
 
sysreg_save_host_state_vhe(host_ctxt);
+   __debug_save_host_context(vcpu);
 
/*
 * ARM erratum 1165522 requires us to configure both stage 1 and
@@ -488,7 +489,7 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
__activate_traps(vcpu);
 
sysreg_restore_guest_state_vhe(guest_ctxt);
-   __debug_switch_to_guest(vcpu);
+   __debug_restore_guest_context(vcpu);
 
__set_guest_arch_workaround_state(vcpu);
 
@@ -502,6 +503,7 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
__set_host_arch_workaround_state(vcpu);
 
sysreg_save_guest_state_vhe(guest_ctxt);
+   __debug_save_guest_context(vcpu);
 
__deactivate_traps(vcpu);
 
@@ -510,7 +512,7 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu
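
The last hunk is truncated by the archive. For clarity, here is a standalone
sketch of the intended ordering of the four hooks around the world switch; the
stub types and function names mirror the hooks added above purely so the
snippet compiles on its own, and it is an illustration, not the literal
kernel code:

	/* Stubs standing in for the kernel's __debug_* hooks. */
	struct kvm_vcpu { int dummy; };

	static void debug_save_host_context(struct kvm_vcpu *vcpu)     { (void)vcpu; }
	static void debug_restore_guest_context(struct kvm_vcpu *vcpu) { (void)vcpu; }
	static void debug_save_guest_context(struct kvm_vcpu *vcpu)    { (void)vcpu; }
	static void debug_restore_host_context(struct kvm_vcpu *vcpu)  { (void)vcpu; }

	void world_switch_sketch(struct kvm_vcpu *vcpu)
	{
		debug_save_host_context(vcpu);      /* 1: host state; may stop/flush SPE */
		debug_restore_guest_context(vcpu);  /* 2: install guest debug state */

		/* ... the guest runs between these two points ... */

		debug_save_guest_context(vcpu);     /* 3: a placeholder in this series */
		debug_restore_host_context(vcpu);   /* 4: restore host debug state */
	}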

[PATCH 13/13] KVM: arm64: enable SPE support

2019-02-28 Thread Sudeep Holla
We have all the bits and pieces to enable SPE for guests in place, so
let's enable it.

Signed-off-by: Sudeep Holla 
---
 virt/kvm/arm/arm.c | 4 
 1 file changed, 4 insertions(+)

diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index 9c486fad3f9f..4f2672ea5afd 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -585,6 +585,10 @@ static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
return ret;
 
ret = kvm_arm_pmu_v3_enable(vcpu);
+   if (ret)
+   return ret;
+
+   ret = kvm_arm_spe_v1_enable(vcpu);
 
return ret;
 }
-- 
2.17.1



[PATCH 03/13] arm64: KVM: reset E2PB correctly in MDCR_EL2 when exiting the guest(VHE)

2019-02-28 Thread Sudeep Holla
On VHE systems, the reset value is MDCR_EL2.E2PB=0b00, which defaults to the
profiling buffer using the EL2 stage 1 translations. However, if the guest
is allowed to use profiling buffers by changing the E2PB settings, we need
to ensure we restore MDCR_EL2.E2PB=0b00 on exit. Currently we just do a
bitwise '&' with MDCR_EL2_E2PB_MASK, which retains the value.

So fix it by clearing all the bits in E2PB.

Signed-off-by: Sudeep Holla 
---
 arch/arm64/kvm/hyp/switch.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index 421ebf6f7086..08e2a01188ac 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -198,9 +198,7 @@ void deactivate_traps_vhe_put(void)
 {
u64 mdcr_el2 = read_sysreg(mdcr_el2);
 
-   mdcr_el2 &= MDCR_EL2_HPMN_MASK |
-   MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT |
-   MDCR_EL2_TPMS;
+   mdcr_el2 &= MDCR_EL2_HPMN_MASK | MDCR_EL2_TPMS;
 
write_sysreg(mdcr_el2, mdcr_el2);
 
-- 
2.17.1
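
To see why dropping the E2PB term from the mask clears the field, here is a
small standalone program; the shift/mask values are taken from the kernel's
definitions of that era (E2PB is MDCR_EL2 bits [13:12]), and the initial
register value is a made-up example:

	#include <stdint.h>
	#include <stdio.h>

	#define MDCR_EL2_TPMS		(UINT64_C(1) << 14)
	#define MDCR_EL2_E2PB_MASK	UINT64_C(3)
	#define MDCR_EL2_E2PB_SHIFT	12
	#define MDCR_EL2_HPMN_MASK	UINT64_C(0x1f)

	int main(void)
	{
		/* Suppose the guest left E2PB = 0b11 (buffer owned by EL1&0). */
		uint64_t mdcr = MDCR_EL2_TPMS |
				(UINT64_C(3) << MDCR_EL2_E2PB_SHIFT) | 0x10;

		/* The old mask retains whatever E2PB value was in place... */
		uint64_t kept = mdcr & (MDCR_EL2_HPMN_MASK |
					MDCR_EL2_E2PB_MASK << MDCR_EL2_E2PB_SHIFT |
					MDCR_EL2_TPMS);
		/* ...while the fixed mask forces E2PB back to 0b00. */
		uint64_t cleared = mdcr & (MDCR_EL2_HPMN_MASK | MDCR_EL2_TPMS);

		printf("old mask: E2PB=%llu, fixed mask: E2PB=%llu\n",
		       (unsigned long long)((kept >> MDCR_EL2_E2PB_SHIFT) & 3),
		       (unsigned long long)((cleared >> MDCR_EL2_E2PB_SHIFT) & 3));
		return 0;
	}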



[PATCH 05/13] arm64: KVM: add access handler for SPE system registers

2019-02-28 Thread Sudeep Holla
The EL that owns the SPE Profiling Buffer is configurable, and when
MDCR_EL2.E2PB is configured to give buffer ownership to EL1, the control
registers are trapped.

Add access handlers for the Statistical Profiling Extension (SPE)
Profiling Buffer control registers. This is needed to support profiling
using SPE in the guests.

Signed-off-by: Sudeep Holla 
---
 arch/arm64/include/asm/kvm_host.h | 13 
 arch/arm64/kvm/sys_regs.c | 35 +++
 include/kvm/arm_spe.h | 14 +
 3 files changed, 62 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 6714d6a0ef1e..d113b8271a75 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -133,6 +133,19 @@ enum vcpu_sysreg {
MDCCINT_EL1,/* Monitor Debug Comms Channel Interrupt Enable Reg */
DISR_EL1,   /* Deferred Interrupt Status Register */
 
+   /* Statistical Profiling Extension Registers */
+
+   PMSCR_EL1,
+   PMSICR_EL1,
+   PMSIRR_EL1,
+   PMSFCR_EL1,
+   PMSEVFR_EL1,
+   PMSLATFR_EL1,
+   PMSIDR_EL1,
+   PMBLIMITR_EL1,
+   PMBPTR_EL1,
+   PMBSR_EL1,
+
/* Performance Monitors Registers */
PMCR_EL0,   /* Control Register */
PMSELR_EL0, /* Event Counter Selection Register */
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index c936aa40c3f4..9a8571974df0 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -646,6 +646,30 @@ static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
__vcpu_sys_reg(vcpu, PMCR_EL0) = val;
 }
 
+static bool access_pmsb_val(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+   const struct sys_reg_desc *r)
+{
+   if (p->is_write)
+   vcpu_write_sys_reg(vcpu, p->regval, r->reg);
+   else
+   p->regval = vcpu_read_sys_reg(vcpu, r->reg);
+
+   return true;
+}
+
+static void reset_pmsb_val(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+{
+   if (!kvm_arm_support_spe_v1()) {
+   __vcpu_sys_reg(vcpu, r->reg) = 0;
+   return;
+   }
+
+   if (r->reg == PMSIDR_EL1)
+   __vcpu_sys_reg(vcpu, r->reg) = read_sysreg_s(SYS_PMSIDR_EL1);
+   else
+   __vcpu_sys_reg(vcpu, r->reg) = 0;
+}
+
 static bool check_pmu_access_disabled(struct kvm_vcpu *vcpu, u64 flags)
 {
u64 reg = __vcpu_sys_reg(vcpu, PMUSERENR_EL0);
@@ -1344,6 +1368,17 @@ static const struct sys_reg_desc sys_reg_descs[] = {
{ SYS_DESC(SYS_FAR_EL1), access_vm_reg, reset_unknown, FAR_EL1 },
{ SYS_DESC(SYS_PAR_EL1), NULL, reset_unknown, PAR_EL1 },
 
+   { SYS_DESC(SYS_PMSCR_EL1), access_pmsb_val, reset_pmsb_val, PMSCR_EL1 },
+   { SYS_DESC(SYS_PMSICR_EL1), access_pmsb_val, reset_pmsb_val, PMSICR_EL1 },
+   { SYS_DESC(SYS_PMSIRR_EL1), access_pmsb_val, reset_pmsb_val, PMSIRR_EL1 },
+   { SYS_DESC(SYS_PMSFCR_EL1), access_pmsb_val, reset_pmsb_val, PMSFCR_EL1 },
+   { SYS_DESC(SYS_PMSEVFR_EL1), access_pmsb_val, reset_pmsb_val, PMSEVFR_EL1 },
+   { SYS_DESC(SYS_PMSLATFR_EL1), access_pmsb_val, reset_pmsb_val, PMSLATFR_EL1 },
+   { SYS_DESC(SYS_PMSIDR_EL1), access_pmsb_val, reset_pmsb_val, PMSIDR_EL1 },
+   { SYS_DESC(SYS_PMBLIMITR_EL1), access_pmsb_val, reset_pmsb_val, PMBLIMITR_EL1 },
+   { SYS_DESC(SYS_PMBPTR_EL1), access_pmsb_val, reset_pmsb_val, PMBPTR_EL1 },
+   { SYS_DESC(SYS_PMBSR_EL1), access_pmsb_val, reset_pmsb_val, PMBSR_EL1 },
+
    { SYS_DESC(SYS_PMINTENSET_EL1), access_pminten, reset_unknown, PMINTENSET_EL1 },
    { SYS_DESC(SYS_PMINTENCLR_EL1), access_pminten, NULL, PMINTENSET_EL1 },
 
diff --git a/include/kvm/arm_spe.h b/include/kvm/arm_spe.h
index 8c96bdfad6ac..5678f80e1528 100644
--- a/include/kvm/arm_spe.h
+++ b/include/kvm/arm_spe.h
@@ -8,6 +8,7 @@
 
 #include 
 #include 
+#include 
 
 struct kvm_spe {
int irq;
@@ -15,4 +16,17 @@ struct kvm_spe {
bool created; /* SPE KVM instance is created, may not be ready yet */
 };
 
+#ifdef CONFIG_KVM_ARM_SPE
+
+static inline bool kvm_arm_support_spe_v1(void)
+{
+   u64 dfr0 = read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1);
+   return !!cpuid_feature_extract_unsigned_field(dfr0,
+ ID_AA64DFR0_PMSVER_SHIFT);
+}
+#else
+
+#define kvm_arm_support_spe_v1()   (false)
+#endif /* CONFIG_KVM_ARM_SPE */
+
 #endif /* __ASM_ARM_KVM_SPE_H */
-- 
2.17.1
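
The kvm_arm_support_spe_v1() helper above boils down to a 4-bit ID-register
field extraction. A standalone sketch of the same check, for illustration
only: PMSVer lives in ID_AA64DFR0_EL1[35:32], and the shift value matches the
kernel's ID_AA64DFR0_PMSVER_SHIFT of that era.

	#include <stdint.h>

	#define ID_AA64DFR0_PMSVER_SHIFT	32

	/* ID register fields are 4 bits wide; a nonzero PMSVer means SPE
	 * is implemented. */
	static inline unsigned int extract_id_field(uint64_t reg,
						    unsigned int shift)
	{
		return (unsigned int)((reg >> shift) & 0xf);
	}

	static inline int support_spe_v1(uint64_t dfr0)
	{
		return extract_id_field(dfr0, ID_AA64DFR0_PMSVER_SHIFT) != 0;
	}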



Re: [PATCH][RESEND] arm64: kvm: reuse existing cache type/info related macros

2017-08-04 Thread Sudeep Holla
Hi Christoffer,

On 04/08/17 14:08, Christoffer Dall wrote:
> Hi Sudeep,
> 
> On Fri, Aug 04, 2017 at 01:53:57PM +0100, Sudeep Holla wrote:
>> We already have various macros related to cache type and bitfields in
>> CLIDR system register. We can replace some of the hardcoded values
>> here using those existing macros.
>>
>> This patch reuses those existing cache type/info related macros and
>> replaces the hardcoded values. It also removes some of the comments
>> that become trivial with the macro names.
>>
>> Cc: Catalin Marinas <catalin.mari...@arm.com>
>> Cc: Will Deacon <will.dea...@arm.com>
>> Cc: Christoffer Dall <christoffer.d...@linaro.org>
>> Cc: Marc Zyngier <marc.zyng...@arm.com>
>> Signed-off-by: Sudeep Holla <sudeep.ho...@arm.com>
>> ---
>>  arch/arm64/include/asm/cache.h |  7 +++
>>  arch/arm64/kernel/cacheinfo.c  |  7 ---
>>  arch/arm64/kvm/sys_regs.c  | 29 +++--
>>  3 files changed, 22 insertions(+), 21 deletions(-)
>>
>> Hi,
>>
>> I dropped the support for 64bit format CCSIDR after Will's commit 
>> a8d4636f96ad
>> ("arm64: cacheinfo: Remove CCSIDR-based cache information probing"). However
>> I forgot to follow up on this patch which can be still applied. So just
>> reposting again rebasing on v4.13-rc3 as mentioned by Will as it was too
>> late for last cycle. Christoffer was fine with the changes but has not
>> given an official ACK.
>>
> 
> Reviewed-by: Christoffer Dall <cd...@linaro.org>
> 
Thanks for the quick response and review tag.

-- 
Regards,
Sudeep


[PATCH][RESEND] arm64: kvm: reuse existing cache type/info related macros

2017-08-04 Thread Sudeep Holla
We already have various macros related to cache type and bitfields in
CLIDR system register. We can replace some of the hardcoded values
here using those existing macros.

This patch reuses those existing cache type/info related macros and
replaces the hardcoded values. It also removes some of the comments
that become trivial with the macro names.

Cc: Catalin Marinas <catalin.mari...@arm.com>
Cc: Will Deacon <will.dea...@arm.com>
Cc: Christoffer Dall <christoffer.d...@linaro.org>
Cc: Marc Zyngier <marc.zyng...@arm.com>
Signed-off-by: Sudeep Holla <sudeep.ho...@arm.com>
---
 arch/arm64/include/asm/cache.h |  7 +++
 arch/arm64/kernel/cacheinfo.c  |  7 ---
 arch/arm64/kvm/sys_regs.c  | 29 +++--
 3 files changed, 22 insertions(+), 21 deletions(-)

Hi,

I dropped the support for 64bit format CCSIDR after Will's commit a8d4636f96ad
("arm64: cacheinfo: Remove CCSIDR-based cache information probing"). However
I forgot to follow up on this patch, which can still be applied. So I am just
reposting it again, rebased on v4.13-rc3 as mentioned by Will, since it was
too late for the last cycle. Christoffer was fine with the changes but has
not given an official ACK.

Regards,
Sudeep

diff --git a/arch/arm64/include/asm/cache.h b/arch/arm64/include/asm/cache.h
index ea9bb4e0e9bb..70fd4357ed38 100644
--- a/arch/arm64/include/asm/cache.h
+++ b/arch/arm64/include/asm/cache.h
@@ -49,6 +49,13 @@
 #define ICACHEF_VPIPT  1
 extern unsigned long __icache_flags;

+#define MAX_CACHE_LEVEL    7   /* Max 7 level supported */
+/* Ctypen, bits[3(n - 1) + 2 : 3(n - 1)], for n = 1 to 7 */
+#define CLIDR_CTYPE_SHIFT(level)   (3 * (level - 1))
+#define CLIDR_CTYPE_MASK(level)(7 << CLIDR_CTYPE_SHIFT(level))
+#define CLIDR_CTYPE(clidr, level)  \
+   (((clidr) & CLIDR_CTYPE_MASK(level)) >> CLIDR_CTYPE_SHIFT(level))
+
 /*
  * Whilst the D-side always behaves as PIPT on AArch64, aliasing is
  * permitted in the I-cache.
diff --git a/arch/arm64/kernel/cacheinfo.c b/arch/arm64/kernel/cacheinfo.c
index 380f2e2fbed5..4798aa4bc17b 100644
--- a/arch/arm64/kernel/cacheinfo.c
+++ b/arch/arm64/kernel/cacheinfo.c
@@ -20,13 +20,6 @@
 #include 
 #include 

-#define MAX_CACHE_LEVEL    7   /* Max 7 level supported */
-/* Ctypen, bits[3(n - 1) + 2 : 3(n - 1)], for n = 1 to 7 */
-#define CLIDR_CTYPE_SHIFT(level)   (3 * (level - 1))
-#define CLIDR_CTYPE_MASK(level)(7 << CLIDR_CTYPE_SHIFT(level))
-#define CLIDR_CTYPE(clidr, level)  \
-   (((clidr) & CLIDR_CTYPE_MASK(level)) >> CLIDR_CTYPE_SHIFT(level))
-
 static inline enum cache_type get_cache_type(int level)
 {
u64 clidr;
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 77862881ae86..5601f77d1e1e 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -21,11 +21,13 @@
  */

 #include 
+#include 
 #include 
 #include 
 #include 

 #include 
+#include 
 #include 
 #include 
 #include 
@@ -79,7 +81,7 @@ static bool write_to_read_only(struct kvm_vcpu *vcpu,
 static u32 cache_levels;

 /* CSSELR values; used to index KVM_REG_ARM_DEMUX_ID_CCSIDR */
-#define CSSELR_MAX 12
+#define CSSELR_MAX ((MAX_CACHE_LEVEL - 1) << 1)

 /* Which cache CCSIDR represents depends on CSSELR value. */
 static u32 get_ccsidr(u32 csselr)
@@ -1913,19 +1915,18 @@ static bool is_valid_cache(u32 val)
return false;

/* Bottom bit is Instruction or Data bit.  Next 3 bits are level. */
-   level = (val >> 1);
-   ctype = (cache_levels >> (level * 3)) & 7;
+   level = (val >> 1) + 1;
+   ctype = CLIDR_CTYPE(cache_levels, level);

switch (ctype) {
-   case 0: /* No cache */
-   return false;
-   case 1: /* Instruction cache only */
-   return (val & 1);
-   case 2: /* Data cache only */
-   case 4: /* Unified cache */
-   return !(val & 1);
-   case 3: /* Separate instruction and data caches */
+   case CACHE_TYPE_INST:
+   return (val & CACHE_TYPE_INST);
+   case CACHE_TYPE_DATA:
+   case CACHE_TYPE_UNIFIED:
+   return !(val & CACHE_TYPE_INST);
+   case CACHE_TYPE_SEPARATE:
return true;
+   case CACHE_TYPE_NOCACHE:
default: /* Reserved: we can't know instruction or data. */
return false;
}
@@ -2192,11 +2193,11 @@ void kvm_sys_reg_table_init(void)
 */
	get_clidr_el1(NULL, &clidr); /* Ugly... */
cache_levels = clidr.val;
-   for (i = 0; i < 7; i++)
-   if (((cache_levels >> (i*3)) & 7) == 0)
+   for (i = 1; i <= MAX_CACHE_LEVEL; i++)
+   if (CLIDR_CTYPE(cache_levels, i) == CACHE_TYPE_NOCACHE)
break;
/* Clear al

Re: [PATCH] arm64: kvm: reuse existing cache type/info related macros

2017-07-19 Thread Sudeep Holla


On 19/07/17 10:01, Christoffer Dall wrote:
> On Thu, Jun 29, 2017 at 07:06:44PM +0100, Sudeep Holla wrote:
>> We already have various macros related to cache type and bitfields in
>> CLIDR system register. We can replace some of the hardcoded values
>> here using those existing macros.
>>
>> This patch reuses those existing cache type/info related macros and
>> replaces the hardcoded values. It also removes some of the comments
>> that become trivial with the macro names.
>>
>> Cc: Catalin Marinas <catalin.mari...@arm.com>
>> Cc: Will Deacon <will.dea...@arm.com>
>> Cc: Christoffer Dall <christoffer.d...@linaro.org>
>> Cc: Marc Zyngier <marc.zyng...@arm.com>
>> Signed-off-by: Sudeep Holla <sudeep.ho...@arm.com>
>> ---
>>  arch/arm64/include/asm/cache.h |  7 +++
>>  arch/arm64/kernel/cacheinfo.c  |  7 ---
>>  arch/arm64/kvm/sys_regs.c  | 29 +++--
>>  3 files changed, 22 insertions(+), 21 deletions(-)
>>

[...]
>>  /* Which cache CCSIDR represents depends on CSSELR value. */
>>  static u32 get_ccsidr(u32 csselr)
>> @@ -1894,19 +1896,18 @@ static bool is_valid_cache(u32 val)
>>  return false;
>>  
>>  /* Bottom bit is Instruction or Data bit.  Next 3 bits are level. */
>> -level = (val >> 1);
>> -ctype = (cache_levels >> (level * 3)) & 7;
>> +level = (val >> 1) + 1;
>> +ctype = CLIDR_CTYPE(cache_levels, level);
>>  
>>  switch (ctype) {
>> -case 0: /* No cache */
>> -return false;
>> -case 1: /* Instruction cache only */
>> -return (val & 1);
>> -case 2: /* Data cache only */
>> -case 4: /* Unified cache */
>> -return !(val & 1);
>> -case 3: /* Separate instruction and data caches */
>> +case CACHE_TYPE_INST:
>> +return (val & CACHE_TYPE_INST);
>> +case CACHE_TYPE_DATA:
>> +case CACHE_TYPE_UNIFIED:
>> +return !(val & CACHE_TYPE_INST);
>> +case CACHE_TYPE_SEPARATE:
>>  return true;
>> +case CACHE_TYPE_NOCACHE:
>>  default: /* Reserved: we can't know instruction or data. */
>>  return false;
>>  }
> 
> These defines seem to be arch-generic concepts defined in
> include/linux/cacheinfo.h.  Are they guaranteed to not change and
> therefore always match the format of ARM registers?
> 

Indeed, I have added the mapping code for other architectures where it
doesn't match. We can do the same when ARM deviates :)

> Otherwise, this looks good to me.
>

Thanks, Will/Marc had requested to repost after -rc1, will do that ASAP.

-- 
Regards,
Sudeep


[PATCH] arm64: kvm: reuse existing cache type/info related macros

2017-06-29 Thread Sudeep Holla
We already have various macros related to cache type and bitfields in
CLIDR system register. We can replace some of the hardcoded values
here using those existing macros.

This patch reuses those existing cache type/info related macros and
replaces the hardcoded values. It also removes some of the comments
that become trivial with the macro names.

Cc: Catalin Marinas <catalin.mari...@arm.com>
Cc: Will Deacon <will.dea...@arm.com>
Cc: Christoffer Dall <christoffer.d...@linaro.org>
Cc: Marc Zyngier <marc.zyng...@arm.com>
Signed-off-by: Sudeep Holla <sudeep.ho...@arm.com>
---
 arch/arm64/include/asm/cache.h |  7 +++
 arch/arm64/kernel/cacheinfo.c  |  7 ---
 arch/arm64/kvm/sys_regs.c  | 29 +++--
 3 files changed, 22 insertions(+), 21 deletions(-)

Hi,

I dropped the support for 64bit format CCSIDR after Will's commit a8d4636f96ad
("arm64: cacheinfo: Remove CCSIDR-based cache information probing"). However
I forgot to follow up on this patch, which can still be applied. So I am just
reposting it.

Regards,
Sudeep

diff --git a/arch/arm64/include/asm/cache.h b/arch/arm64/include/asm/cache.h
index ea9bb4e0e9bb..70fd4357ed38 100644
--- a/arch/arm64/include/asm/cache.h
+++ b/arch/arm64/include/asm/cache.h
@@ -49,6 +49,13 @@
 #define ICACHEF_VPIPT  1
 extern unsigned long __icache_flags;
 
+#define MAX_CACHE_LEVEL    7   /* Max 7 level supported */
+/* Ctypen, bits[3(n - 1) + 2 : 3(n - 1)], for n = 1 to 7 */
+#define CLIDR_CTYPE_SHIFT(level)   (3 * (level - 1))
+#define CLIDR_CTYPE_MASK(level)(7 << CLIDR_CTYPE_SHIFT(level))
+#define CLIDR_CTYPE(clidr, level)  \
+   (((clidr) & CLIDR_CTYPE_MASK(level)) >> CLIDR_CTYPE_SHIFT(level))
+
 /*
  * Whilst the D-side always behaves as PIPT on AArch64, aliasing is
  * permitted in the I-cache.
diff --git a/arch/arm64/kernel/cacheinfo.c b/arch/arm64/kernel/cacheinfo.c
index 380f2e2fbed5..4798aa4bc17b 100644
--- a/arch/arm64/kernel/cacheinfo.c
+++ b/arch/arm64/kernel/cacheinfo.c
@@ -20,13 +20,6 @@
 #include 
 #include 
 
-#define MAX_CACHE_LEVEL    7   /* Max 7 level supported */
-/* Ctypen, bits[3(n - 1) + 2 : 3(n - 1)], for n = 1 to 7 */
-#define CLIDR_CTYPE_SHIFT(level)   (3 * (level - 1))
-#define CLIDR_CTYPE_MASK(level)(7 << CLIDR_CTYPE_SHIFT(level))
-#define CLIDR_CTYPE(clidr, level)  \
-   (((clidr) & CLIDR_CTYPE_MASK(level)) >> CLIDR_CTYPE_SHIFT(level))
-
 static inline enum cache_type get_cache_type(int level)
 {
u64 clidr;
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 0fe27024a2e1..e4107047f405 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -21,11 +21,13 @@
  */
 
 #include 
+#include 
 #include 
 #include 
 #include 
 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -68,7 +70,7 @@ static bool read_from_write_only(struct kvm_vcpu *vcpu,
 static u32 cache_levels;
 
 /* CSSELR values; used to index KVM_REG_ARM_DEMUX_ID_CCSIDR */
-#define CSSELR_MAX 12
+#define CSSELR_MAX ((MAX_CACHE_LEVEL - 1) << 1)
 
 /* Which cache CCSIDR represents depends on CSSELR value. */
 static u32 get_ccsidr(u32 csselr)
@@ -1894,19 +1896,18 @@ static bool is_valid_cache(u32 val)
return false;
 
/* Bottom bit is Instruction or Data bit.  Next 3 bits are level. */
-   level = (val >> 1);
-   ctype = (cache_levels >> (level * 3)) & 7;
+   level = (val >> 1) + 1;
+   ctype = CLIDR_CTYPE(cache_levels, level);
 
switch (ctype) {
-   case 0: /* No cache */
-   return false;
-   case 1: /* Instruction cache only */
-   return (val & 1);
-   case 2: /* Data cache only */
-   case 4: /* Unified cache */
-   return !(val & 1);
-   case 3: /* Separate instruction and data caches */
+   case CACHE_TYPE_INST:
+   return (val & CACHE_TYPE_INST);
+   case CACHE_TYPE_DATA:
+   case CACHE_TYPE_UNIFIED:
+   return !(val & CACHE_TYPE_INST);
+   case CACHE_TYPE_SEPARATE:
return true;
+   case CACHE_TYPE_NOCACHE:
default: /* Reserved: we can't know instruction or data. */
return false;
}
@@ -2173,11 +2174,11 @@ void kvm_sys_reg_table_init(void)
 */
	get_clidr_el1(NULL, &clidr); /* Ugly... */
cache_levels = clidr.val;
-   for (i = 0; i < 7; i++)
-   if (((cache_levels >> (i*3)) & 7) == 0)
+   for (i = 1; i <= MAX_CACHE_LEVEL; i++)
+   if (CLIDR_CTYPE(cache_levels, i) == CACHE_TYPE_NOCACHE)
break;
/* Clear all higher bits. */
-   cache_levels &= (1 << (i*3))-1;
+   cache_levels &= (1 << CLIDR_CTYPE_SHIFT(i)) - 1;
 }
 
 /**
-- 
2.7.4
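
As a worked example of the CLIDR_CTYPE() macros this patch reuses, the
following standalone program decodes a hand-built CLIDR value; the example
value is an assumption for illustration, not taken from real hardware:

	#include <stdint.h>
	#include <stdio.h>

	#define CLIDR_CTYPE_SHIFT(level)	(3 * (level - 1))
	#define CLIDR_CTYPE_MASK(level)		(7 << CLIDR_CTYPE_SHIFT(level))
	#define CLIDR_CTYPE(clidr, level)	\
		(((clidr) & CLIDR_CTYPE_MASK(level)) >> CLIDR_CTYPE_SHIFT(level))

	int main(void)
	{
		/* Example CLIDR: L1 separate I+D (0b011), L2 unified (0b100). */
		uint64_t clidr = (3 << CLIDR_CTYPE_SHIFT(1)) |
				 (4 << CLIDR_CTYPE_SHIFT(2));

		printf("L1 ctype = %llu (separate)\n",
		       (unsigned long long)CLIDR_CTYPE(clidr, 1));
		printf("L2 ctype = %llu (unified)\n",
		       (unsigned long long)CLIDR_CTYPE(clidr, 2));
		return 0;
	}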



Re: Vexpress TC2 no longer booting on v4.12-rc1

2017-06-05 Thread Sudeep Holla


On 02/06/17 23:16, Mathieu Poirier wrote:
> Good afternoon Russell and friends,
> 
> I noticed that my vexpress-TC2 platform stopped booting when moving to
> kernel v4.12-rc1 (same with -rc2 and 3).  The last time things worked
> properly was on v4.11.  I did a bisect between v4.11 and v4.12-rc1 and
> ended up on [1], hence this email.
> 
> Since CONFIG_ARM_VIRT_EXT is selected by  default I removed the
> "#ifdef CONFIG_ARM_VIRT_EXT" section in the last hunk of the patch and
> the system sprung up to life again.
> 
> Compiler: arm-linux-gnueabi-gcc (Ubuntu/Linaro 5.4.0-6ubuntu1~16.04.4)
> 5.4.0 20160609
> Kernel command line: console=ttyAMA0,38400 loglevel=8 root=/dev/sda2 rootwait
> U-boot:
>  ## Flattened Device Tree blob at 8200
>Booting using the fdt blob at 0x8200
>Loading Ramdisk to 9fcea000, end 9feea6d1 ... OK
>Loading Device Tree to 9fce2000, end 9fce9ad8 ... OK
> 
> I'm not sure what else you need at this time - simply get back to me
> with what I'm missing and I'll be happy to provide it.  I'm also
> offering to test patches.
> 

Fixed in [1], and it should be queued in rmk's tree [2].

-- 
Regards,
Sudeep

[1] https://www.spinics.net/lists/arm-kernel/msg581877.html
[2] http://www.armlinux.org.uk/developer/patches/viewpatch.php?id=8675


Re: [PATCH v2 3/3] arm64: kvm: add support for the extended 64bit ccsidr

2017-02-03 Thread Sudeep Holla
Hi Marc,

On 01/02/17 16:02, Marc Zyngier wrote:
[...]

> 
> I'm a bit worried about this patch. If we snapshot a VM on a 32bit
> CCSIDR system, and restore it on a 64bit CSSIDR system (or the reverse),
> what happens? My hunch is that we cannot restore the VM properly.
>

I agree. I had a look at QEMU as you suggested offline. It looks like QEMU
emulates these registers with predefined values for each core type. Also,
it looks like it's not using the existing DEMUX_ID_CCSIDR.

> Now, I'm questioning the need for having those altogether, as we do a
> lot of work to prevent the guest from actually using that geometry (and
> on a big-little system, this hardly works).
> 

If we can conclude there are no users for DEMUX_ID_CCSIDR, we can remove
it altogether instead of introducing a new one for 64-bit.

Are there any other users of this interface provided by KVM apart from
kvmtool and QEMU?

-- 
Regards,
Sudeep


Re: [PATCH v2 1/3] arm64: cacheinfo: add support for alternative format of CCSIDR_EL1

2017-01-30 Thread Sudeep Holla


On 30/01/17 16:47, Suzuki K Poulose wrote:
> On 30/01/17 16:25, Sudeep Holla wrote:
>> The number of sets described for each cache level in the CCSIDR is
>> limited to 32K and the associativity is limited to 1024 ways.
>>
>> As part of the ARMv8.3 extensions, an alternative format for the
>> CCSIDR_EL1 is introduced for AArch64, and for AArch32, a new CCSIDR2
>> register is introduced to hold the upper 32 bits of this information,
>> and the CCSIDR register format is changed. An identification register
>> is also added to indicate the presence of this functionality.
>>
>> This patch adds support for the alternative format of CCSIDR_EL1.

[...]

>>  void __init setup_cpu_features(void);
>>
>>  void update_cpu_capabilities(const struct arm64_cpu_capabilities *caps,
>> @@ -224,6 +229,10 @@ static inline bool system_uses_ttbr0_pan(void)
>>  !cpus_have_cap(ARM64_HAS_PAN);
>>  }
>>
>> +static inline bool cpu_supports_ccsidr_64b_format(void)
>> +{
>> +return id_aa64mmfr2_ccsidr_64b_format(read_system_reg(SYS_ID_AA64MMFR2_EL1));
> 
> read_system_reg() gives you the system wide safe value for the register,
> which could be different from that of the current CPU. You have to use
> read_sysreg_s() to read a register on the CPU (these registers aren't yet
> recognized by GAS).
> 
> Sorry, the names are a bit confusing, which can easily cause such
> issues. May be we should rename some of them.
> 

Thanks, fixed locally, will post as part of next version.

>> +}
>>  #endif /* __ASSEMBLY__ */
>>
>>  #endif
>> diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
>> index 98ae03f8eedd..c72dfe8807ca 100644
>> --- a/arch/arm64/include/asm/sysreg.h
>> +++ b/arch/arm64/include/asm/sysreg.h
>> @@ -183,6 +183,7 @@
>>  #define ID_AA64MMFR1_VMIDBITS_16    2
>>
>>  /* id_aa64mmfr2 */
>> +#define ID_AA64MMFR2_CCIDX_SHIFT    20
>>  #define ID_AA64MMFR2_LVA_SHIFT      16
>>  #define ID_AA64MMFR2_IESB_SHIFT     12
>>  #define ID_AA64MMFR2_LSM_SHIFT      8
>> diff --git a/arch/arm64/kernel/cacheinfo.c b/arch/arm64/kernel/cacheinfo.c
>> index 3f2250fc391b..888b38f1709f 100644
>> --- a/arch/arm64/kernel/cacheinfo.c
>> +++ b/arch/arm64/kernel/cacheinfo.c
>> @@ -43,6 +43,13 @@ static inline enum cache_type get_cache_type(int level)
>>  return CLIDR_CTYPE(clidr, level);
>>  }
>>
>> +int icache_get_numsets(void)
> 
> Could this be static ? I could not see it used anywhere else outside
> this file.

Used in arch/arm64/kernel/cpuinfo.c

-- 
Regards,
Sudeep


[PATCH v2 3/3] arm64: kvm: add support for the extended 64bit ccsidr

2017-01-30 Thread Sudeep Holla
csselr and ccsidr are treated as 64-bit values already elsewhere in the
kernel. It also aligns well with the architecture extensions that allow
64-bit format for ccsidr.

This patch upgrades the existing accesses to csselr and ccsidr from
32-bit to 64-bit in preparation for adding support for those extensions.
It also adds a dedicated KVM_REG_ARM_DEMUX_ID_EXT_CCSIDR demux register
ID to handle the 64-bit ccsidr in KVM.

Cc: Christoffer Dall <christoffer.d...@linaro.org>
Cc: Marc Zyngier <marc.zyng...@arm.com>
Signed-off-by: Sudeep Holla <sudeep.ho...@arm.com>
---
 arch/arm64/include/uapi/asm/kvm.h |   1 +
 arch/arm64/kvm/sys_regs.c | 104 --
 2 files changed, 77 insertions(+), 28 deletions(-)

v1->v2:
- Added dependency on cpu_supports_ccsidr_64b_format(PATCH 1/3)
- Added a new KVM_REG_ARM_DEMUX_ID_EXT_CCSIDR demux register id
  to support new 64bit CCSIDR

diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index 3051f86a9b5f..8aa18e65e6a5 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -161,6 +161,7 @@ struct kvm_arch_memory_slot {
 #define KVM_REG_ARM_DEMUX_ID_MASK  0xFF00
 #define KVM_REG_ARM_DEMUX_ID_SHIFT 8
 #define KVM_REG_ARM_DEMUX_ID_CCSIDR        (0x00 << KVM_REG_ARM_DEMUX_ID_SHIFT)
+#define KVM_REG_ARM_DEMUX_ID_EXT_CCSIDR    (0x01 << KVM_REG_ARM_DEMUX_ID_SHIFT)
 #define KVM_REG_ARM_DEMUX_VAL_MASK 0x00FF
 #define KVM_REG_ARM_DEMUX_VAL_SHIFT0

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 72656743b4cc..f9822ac6d9ab 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -58,15 +58,15 @@
  */

 /* 3 bits per cache level, as per CLIDR, but non-existent caches always 0 */
-static u32 cache_levels;
+static u64 cache_levels;

-/* CSSELR values; used to index KVM_REG_ARM_DEMUX_ID_CCSIDR */
+/* CSSELR values; used to index KVM_REG_ARM_DEMUX_ID_{EXT_,}CCSIDR */
 #define CSSELR_MAX ((MAX_CACHE_LEVEL - 1) << 1)

 /* Which cache CCSIDR represents depends on CSSELR value. */
-static u32 get_ccsidr(u32 csselr)
+static u64 get_ccsidr(u64 csselr)
 {
-   u32 ccsidr;
+   u64 ccsidr;

/* Make sure noone else changes CSSELR during this! */
local_irq_disable();
@@ -1952,9 +1952,9 @@ static int set_invariant_sys_reg(u64 id, void __user *uaddr)
return 0;
 }

-static bool is_valid_cache(u32 val)
+static bool is_valid_cache(u64 val)
 {
-   u32 level, ctype;
+   u64 level, ctype;

if (val >= CSSELR_MAX)
return false;
@@ -1977,10 +1977,28 @@ static bool is_valid_cache(u32 val)
}
 }

+static int demux_ccsidr_validate_get(u64 id, int size, u64 *val)
+{
+   u64 cidx;
+
+   if (KVM_REG_SIZE(id) != size)
+   return -ENOENT;
+
+   cidx = (id & KVM_REG_ARM_DEMUX_VAL_MASK)
+   >> KVM_REG_ARM_DEMUX_VAL_SHIFT;
+   if (!is_valid_cache(cidx))
+   return -ENOENT;
+
+   *val = get_ccsidr(cidx);
+   return 0;
+}
+
 static int demux_c15_get(u64 id, void __user *uaddr)
 {
-   u32 val;
-   u32 __user *uval = uaddr;
+   int ret;
+   u64 val;
+   u32 __user *uval;
+   u64 __user *uval64;

/* Fail if we have unknown bits set. */
if (id & ~(KVM_REG_ARCH_MASK|KVM_REG_SIZE_MASK|KVM_REG_ARM_COPROC_MASK
@@ -1989,14 +2007,17 @@ static int demux_c15_get(u64 id, void __user *uaddr)

switch (id & KVM_REG_ARM_DEMUX_ID_MASK) {
case KVM_REG_ARM_DEMUX_ID_CCSIDR:
-   if (KVM_REG_SIZE(id) != 4)
-   return -ENOENT;
-   val = (id & KVM_REG_ARM_DEMUX_VAL_MASK)
-   >> KVM_REG_ARM_DEMUX_VAL_SHIFT;
-   if (!is_valid_cache(val))
-   return -ENOENT;
-
-   return put_user(get_ccsidr(val), uval);
+   ret = demux_ccsidr_validate_get(id, sizeof(*uval), &val);
+   if (ret)
+   return ret;
+   uval = uaddr;
+   return put_user(val, uval);
+   case KVM_REG_ARM_DEMUX_ID_EXT_CCSIDR:
+   ret = demux_ccsidr_validate_get(id, sizeof(*uval64), &val);
+   if (ret)
+   return ret;
+   uval64 = uaddr;
+   return put_user(val, uval64);
default:
return -ENOENT;
}
@@ -2004,8 +2025,10 @@ static int demux_c15_get(u64 id, void __user *uaddr)

 static int demux_c15_set(u64 id, void __user *uaddr)
 {
-   u32 val, newval;
-   u32 __user *uval = uaddr;
+   int ret;
+   u64 val, newval;
+   u32 __user *uval;
+   u64 __user *uval64;

/* Fail if we have unknown bits set. */
if (id & ~(KVM_REG_ARCH_MASK|KVM_REG_SIZE_MASK|KVM_REG_ARM_COPROC_MASK
@@ -2014,18 +2037,29 @@ static int demux_c15_set(u64 id, void __us
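
The last hunk is truncated by the archive. For context, this is how userspace
reaches these demux registers: a hedged sketch using the KVM_GET_ONE_REG
ioctl, where 'vcpu_fd' and the CSSELR index are assumptions for the example.
The new EXT_CCSIDR ID proposed above would use KVM_REG_SIZE_U64 in place of
KVM_REG_SIZE_U32.

	#include <stdint.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	/* Sketch: read one 32-bit CCSIDR via the existing demux interface. */
	static int read_ccsidr(int vcpu_fd, uint64_t csselr, uint32_t *out)
	{
		struct kvm_one_reg reg = {
			.id   = KVM_REG_ARM64 | KVM_REG_SIZE_U32 |
				KVM_REG_ARM_DEMUX | KVM_REG_ARM_DEMUX_ID_CCSIDR |
				(csselr & KVM_REG_ARM_DEMUX_VAL_MASK),
			.addr = (uint64_t)(unsigned long)out,
		};

		return ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);
	}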

[PATCH v2 1/3] arm64: cacheinfo: add support for alternative format of CCSIDR_EL1

2017-01-30 Thread Sudeep Holla
The number of sets described for each cache level in the CCSIDR is
limited to 32K and the associativity is limited to 1024 ways.

As part of the ARMv8.3 extensions, an alternative format for the
CCSIDR_EL1 is introduced for AArch64, and for AArch32, a new CCSIDR2
register is introduced to hold the upper 32 bits of this information,
and the CCSIDR register format is changed. An identification register
is also added to indicate the presence of this functionality.

This patch adds support for the alternative format of CCSIDR_EL1.

Cc: Catalin Marinas <catalin.mari...@arm.com>
Cc: Will Deacon <will.dea...@arm.com>
Cc: Suzuki K Poulose <suzuki.poul...@arm.com>
Cc: Mark Rutland <mark.rutl...@arm.com>
Signed-off-by: Sudeep Holla <sudeep.ho...@arm.com>
---
 arch/arm64/include/asm/cachetype.h  | 58 +++--
 arch/arm64/include/asm/cpufeature.h |  9 ++
 arch/arm64/include/asm/sysreg.h |  1 +
 arch/arm64/kernel/cacheinfo.c   | 26 -
 arch/arm64/kernel/cpufeature.c  |  1 +
 5 files changed, 67 insertions(+), 28 deletions(-)

diff --git a/arch/arm64/include/asm/cachetype.h b/arch/arm64/include/asm/cachetype.h
index f5588692f1d4..180b3288aa3f 100644
--- a/arch/arm64/include/asm/cachetype.h
+++ b/arch/arm64/include/asm/cachetype.h
@@ -40,9 +40,17 @@
 extern unsigned long __icache_flags;

 /*
- * NumSets, bits[27:13] - (Number of sets in cache) - 1
- * Associativity, bits[12:3] - (Associativity of cache) - 1
- * LineSize, bits[2:0] - (Log2(Number of words in cache line)) - 2
+ * +---------------+---------+---------+------------------------------+
+ * | Property      |   v1    |   v2    | Calculation                  |
+ * +---------------+---------+---------+------------------------------+
+ * | Attributes    | [31:28] |   NA    |                              |
+ * +---------------+---------+---------+------------------------------+
+ * | NumSets       | [27:13] | [55:32] | Number of sets in cache - 1  |
+ * +---------------+---------+---------+------------------------------+
+ * | Associativity | [12: 3] | [23: 3] | Associativity of cache - 1   |
+ * +---------------+---------+---------+------------------------------+
+ * | LineSize      | [ 2: 0] | [ 2: 0] | Log2(Words in cache line) - 2|
+ * +---------------+---------+---------+------------------------------+
  */
 #define CCSIDR_EL1_WRITE_THROUGH   BIT(31)
 #define CCSIDR_EL1_WRITE_BACK  BIT(30)
@@ -50,19 +58,32 @@ extern unsigned long __icache_flags;
 #define CCSIDR_EL1_WRITE_ALLOCATE  BIT(28)
 #define CCSIDR_EL1_LINESIZE_MASK   0x7
 #define CCSIDR_EL1_LINESIZE(x) ((x) & CCSIDR_EL1_LINESIZE_MASK)
-#define CCSIDR_EL1_ASSOCIATIVITY_SHIFT 3
-#define CCSIDR_EL1_ASSOCIATIVITY_MASK  0x3ff
-#define CCSIDR_EL1_ASSOCIATIVITY(x)    \
-   (((x) >> CCSIDR_EL1_ASSOCIATIVITY_SHIFT) & CCSIDR_EL1_ASSOCIATIVITY_MASK)
-#define CCSIDR_EL1_NUMSETS_SHIFT   13
-#define CCSIDR_EL1_NUMSETS_MASK    0x7fff
-#define CCSIDR_EL1_NUMSETS(x) \
-   (((x) >> CCSIDR_EL1_NUMSETS_SHIFT) & CCSIDR_EL1_NUMSETS_MASK)
-
-#define CACHE_LINESIZE(x)  (16 << CCSIDR_EL1_LINESIZE(x))
-#define CACHE_NUMSETS(x)   (CCSIDR_EL1_NUMSETS(x) + 1)
-#define CACHE_ASSOCIATIVITY(x) (CCSIDR_EL1_ASSOCIATIVITY(x) + 1)
-
+#define CCSIDR_EL1_V1_ASSOCIATIVITY_SHIFT  3
+#define CCSIDR_EL1_V1_ASSOCIATIVITY_MASK   0x3ff
+#define CCSIDR_EL1_V2_ASSOCIATIVITY_SHIFT  3
+#define CCSIDR_EL1_V2_ASSOCIATIVITY_MASK   0x1fffff
+#define CCSIDR_EL1_V1_NUMSETS_SHIFT        13
+#define CCSIDR_EL1_V1_NUMSETS_MASK         0x7fff
+#define CCSIDR_EL1_V2_NUMSETS_SHIFT        32
+#define CCSIDR_EL1_V2_NUMSETS_MASK         0xffffff
+
+#define CCSIDR_EL1_V1_ATTRIBUTE_MASK       0xf0000000
+#define CCSIDR_EL1_V2_ATTRIBUTE_MASK       0x0 /* Not supported */
+#define CCSIDR_EL1_ATTRIBUTES(v, x)        ((x) & CCSIDR_EL1_V##v##_ATTRIBUTE_MASK)
+#define CCSIDR_EL1_ASSOCIATIVITY(v, x) \
+   (((x) >> CCSIDR_EL1_V##v##_ASSOCIATIVITY_SHIFT) & CCSIDR_EL1_V##v##_ASSOCIATIVITY_MASK)
+#define CCSIDR_EL1_NUMSETS(v, x) \
+   (((x) >> CCSIDR_EL1_V##v##_NUMSETS_SHIFT) & CCSIDR_EL1_V##v##_NUMSETS_MASK)
+
+#define CACHE_LINESIZE(x)          (16 << CCSIDR_EL1_LINESIZE(x))
+#define CACHE_NUMSETS_V1(x)        (CCSIDR_EL1_NUMSETS(1, x) + 1)
+#define CACHE_ASSOCIATIVITY_V1(x)  (CCSIDR_EL1_ASSOCIATIVITY(1, x) + 1)
+#define CACHE_ATTRIBUTES_V1(x)     (CCSIDR_EL1_ATTRIBUTES(1, x))
+#define CACHE_NUMSETS_V2(x)        (CCSIDR_EL1_NUMSETS(2, x) + 1)
+#define CACHE_ASSOCIATIVITY_V2(x)  (CCSIDR_EL1_ASSOCIATIVITY(2, x) + 1)
+#define CACHE_ATTRIBUTES_V2(x)     (CCSIDR_EL1_ATTRIBUTES(2, x))
+
+extern int icache_get_numsets(void);
 extern u64 __attribute_const__ cache_get_ccsidr(u64 csselr);

 /* Helpers for Level 1 Instruction cache csselr = 1L */
@@ -71,11 +92,6 @@ static inline int icache_get_
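
The patch is truncated here by the archive. To make the two layouts in the
table above concrete, here is a standalone sketch that decodes a v1-format
CCSIDR and computes the cache size; the field positions come from the table,
and the example encoding is an assumption:

	#include <stdint.h>
	#include <stdio.h>

	#define CCSIDR_LINESIZE(x)	(16 << ((x) & 0x7))	/* bytes */
	#define CCSIDR_V1_NUMSETS(x)	((((x) >> 13) & 0x7fff) + 1)
	#define CCSIDR_V1_ASSOC(x)	((((x) >> 3) & 0x3ff) + 1)
	/* v2 per the table: NumSets in [55:32], Associativity in [23:3]. */
	#define CCSIDR_V2_NUMSETS(x)	((((x) >> 32) & 0xffffff) + 1)
	#define CCSIDR_V2_ASSOC(x)	((((x) >> 3) & 0x1fffff) + 1)

	int main(void)
	{
		/* Example: 32KB, 4-way, 64-byte lines => 128 sets.
		 * v1 encoding: LineSize=2, Assoc-1=3, NumSets-1=127. */
		uint64_t v1 = (127u << 13) | (3u << 3) | 2u;
		uint64_t bytes = (uint64_t)CCSIDR_V1_NUMSETS(v1) *
				 CCSIDR_V1_ASSOC(v1) * CCSIDR_LINESIZE(v1);

		printf("v1 example cache size: %llu bytes\n",
		       (unsigned long long)bytes);	/* prints 32768 */
		return 0;
	}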

[PATCH v2 2/3] arm64: kvm: reuse existing cache type/info related macros

2017-01-30 Thread Sudeep Holla
We already have various macros related to cache type and bitfields in
CLIDR system register. We can replace some of the hardcoded values
here using those existing macros.

This patch reuses those existing cache type/info related macros and
replaces the hardcoded values. It also removes some of the comments
that become trivial with the macro names.

Cc: Catalin Marinas <catalin.mari...@arm.com>
Cc: Will Deacon <will.dea...@arm.com>
Cc: Christoffer Dall <christoffer.d...@linaro.org>
Cc: Marc Zyngier <marc.zyng...@arm.com>
Signed-off-by: Sudeep Holla <sudeep.ho...@arm.com>
---
 arch/arm64/include/asm/cachetype.h |  7 +++
 arch/arm64/kernel/cacheinfo.c  |  7 ---
 arch/arm64/kvm/sys_regs.c  | 33 -
 3 files changed, 23 insertions(+), 24 deletions(-)

v1->v2:
- Fixed issue pointed by Christoffer(left shift operator)
- Replace couple of more hardcored values with macros

diff --git a/arch/arm64/include/asm/cachetype.h b/arch/arm64/include/asm/cachetype.h
index 180b3288aa3f..6cefbd39a40f 100644
--- a/arch/arm64/include/asm/cachetype.h
+++ b/arch/arm64/include/asm/cachetype.h
@@ -39,6 +39,13 @@

 extern unsigned long __icache_flags;

+#define MAX_CACHE_LEVEL    7   /* Max 7 level supported */
+/* Ctypen, bits[3(n - 1) + 2 : 3(n - 1)], for n = 1 to 7 */
+#define CLIDR_CTYPE_SHIFT(level)   (3 * (level - 1))
+#define CLIDR_CTYPE_MASK(level)(7 << CLIDR_CTYPE_SHIFT(level))
+#define CLIDR_CTYPE(clidr, level)  \
+   (((clidr) & CLIDR_CTYPE_MASK(level)) >> CLIDR_CTYPE_SHIFT(level))
+
 /*
  * +---------------+---------+---------+------------------------------+
  * | Property      |   v1    |   v2    | Calculation                  |
diff --git a/arch/arm64/kernel/cacheinfo.c b/arch/arm64/kernel/cacheinfo.c
index 888b38f1709f..fdb384f18906 100644
--- a/arch/arm64/kernel/cacheinfo.c
+++ b/arch/arm64/kernel/cacheinfo.c
@@ -26,13 +26,6 @@
 #include 
 #include 

-#define MAX_CACHE_LEVEL    7   /* Max 7 level supported */
-/* Ctypen, bits[3(n - 1) + 2 : 3(n - 1)], for n = 1 to 7 */
-#define CLIDR_CTYPE_SHIFT(level)   (3 * (level - 1))
-#define CLIDR_CTYPE_MASK(level)(7 << CLIDR_CTYPE_SHIFT(level))
-#define CLIDR_CTYPE(clidr, level)  \
-   (((clidr) & CLIDR_CTYPE_MASK(level)) >> CLIDR_CTYPE_SHIFT(level))
-
 static inline enum cache_type get_cache_type(int level)
 {
u64 clidr;
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 87e7e6608cd8..72656743b4cc 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -21,11 +21,13 @@
  */

 #include 
+#include 
 #include 
 #include 
 #include 

 #include 
+#include 
 #include 
 #include 
 #include 
@@ -59,7 +61,7 @@
 static u32 cache_levels;

 /* CSSELR values; used to index KVM_REG_ARM_DEMUX_ID_CCSIDR */
-#define CSSELR_MAX 12
+#define CSSELR_MAX ((MAX_CACHE_LEVEL - 1) << 1)

 /* Which cache CCSIDR represents depends on CSSELR value. */
 static u32 get_ccsidr(u32 csselr)
@@ -68,9 +70,7 @@ static u32 get_ccsidr(u32 csselr)

/* Make sure noone else changes CSSELR during this! */
local_irq_disable();
-   write_sysreg(csselr, csselr_el1);
-   isb();
-   ccsidr = read_sysreg(ccsidr_el1);
+   ccsidr = cache_get_ccsidr(csselr);
local_irq_enable();

return ccsidr;
@@ -1960,19 +1960,18 @@ static bool is_valid_cache(u32 val)
return false;

/* Bottom bit is Instruction or Data bit.  Next 3 bits are level. */
-   level = (val >> 1);
-   ctype = (cache_levels >> (level * 3)) & 7;
+   level = (val >> 1) + 1;
+   ctype = CLIDR_CTYPE(cache_levels, level);

switch (ctype) {
-   case 0: /* No cache */
-   return false;
-   case 1: /* Instruction cache only */
-   return (val & 1);
-   case 2: /* Data cache only */
-   case 4: /* Unified cache */
-   return !(val & 1);
-   case 3: /* Separate instruction and data caches */
+   case CACHE_TYPE_INST:
+   return (val & CACHE_TYPE_INST);
+   case CACHE_TYPE_DATA:
+   case CACHE_TYPE_UNIFIED:
+   return !(val & CACHE_TYPE_INST);
+   case CACHE_TYPE_SEPARATE:
return true;
+   case CACHE_TYPE_NOCACHE:
default: /* Reserved: we can't know instruction or data. */
return false;
}
@@ -2239,11 +2238,11 @@ void kvm_sys_reg_table_init(void)
 */
	get_clidr_el1(NULL, &clidr); /* Ugly... */
cache_levels = clidr.val;
-   for (i = 0; i < 7; i++)
-   if (((cache_levels >> (i*3)) & 7) == 0)
+   for (i = 1; i <= MAX_CACHE_LEVEL; i++)
+   if (CLIDR_CTYPE(cache_levels, i) == CACHE_TYPE_NOCACHE)
break;
   

Re: [PATCH 2/2] arm64: kvm: upgrade csselr and ccsidr to 64-bit values

2017-01-24 Thread Sudeep Holla


On 24/01/17 10:30, Christoffer Dall wrote:
> On Tue, Jan 24, 2017 at 10:15:38AM +0000, Sudeep Holla wrote:
>>
>>
>> On 23/01/17 21:08, Christoffer Dall wrote:
>>> On Fri, Jan 20, 2017 at 10:50:10AM +, Sudeep Holla wrote:
>>>> csselr and ccsidr are treated as 64-bit values already elsewhere in the
>>>> kernel. It also aligns well with the architecture extensions that allow
>>>> 64-bit format for ccsidr.
>>>>
>>>> This patch upgrades the existing accesses to csselr and ccsidr from
>>>> 32-bit to 64-bit in preparation to add support to those extensions.
>>>>
>>>> Cc: Christoffer Dall <christoffer.d...@linaro.org>
>>>> Cc: Marc Zyngier <marc.zyng...@arm.com>
>>>> Signed-off-by: Sudeep Holla <sudeep.ho...@arm.com>
>>>> ---
>>>>  arch/arm64/kvm/sys_regs.c | 18 +-
>>>>  1 file changed, 9 insertions(+), 9 deletions(-)
>>>>
>>>> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
>>>> index 5dca1f10340f..a3559a8a2b0c 100644
>>>> --- a/arch/arm64/kvm/sys_regs.c
>>>> +++ b/arch/arm64/kvm/sys_regs.c
>>
>> [..]
>>
>>>> @@ -2004,8 +2004,8 @@ static int demux_c15_get(u64 id, void __user *uaddr)
>>>>  
>>>>  static int demux_c15_set(u64 id, void __user *uaddr)
>>>>  {
>>>> -  u32 val, newval;
>>>> -  u32 __user *uval = uaddr;
>>>> +  u64 val, newval;
>>>> +  u64 __user *uval = uaddr;
>>>
>>> Doesn't converting these uval pointers to u64 cause us to break the ABI
>>> as we'll now be reading/writing 64-bit values to userspace with the
>>> get_user and put_user following the declarations?
>>>
>>
>> Yes, I too have similar concern. IIUC it is always read via kvm_one_reg
>> structure. I could not find any specific user for this register to cross
>> check.
>>
> 
> Not sure it matters which interface we get the userspace pointer from?
> 

Agreed.

> This patch is definitely changing the write from a 32-bit write to a
> 64-bit write and there's a specific check prior to the put_user() call
> which checks that userspace intended a 32-bit value and presumably
> provided a 32-bit pointer.
> 

I see your point; I missed that check (just to be sure, you mean the
KVM_REG_SIZE check, right?).

> So I think the only way to return 64-bit AArch32 system register values
> to userspace (if that is the intention) is to define a new ID for 64-bit
> CCSIDR registers and handle them separately.
> 

I will add KVM_REG_ARM_DEMUX_ID_CCSIDR_64B or something similar.
Thanks for the review.

-- 
Regards,
Sudeep


Re: [PATCH 2/2] arm64: kvm: upgrade csselr and ccsidr to 64-bit values

2017-01-24 Thread Sudeep Holla


On 23/01/17 21:08, Christoffer Dall wrote:
> On Fri, Jan 20, 2017 at 10:50:10AM +0000, Sudeep Holla wrote:
>> csselr and ccsidr are treated as 64-bit values already elsewhere in the
>> kernel. It also aligns well with the architecture extensions that allow
>> 64-bit format for ccsidr.
>>
>> This patch upgrades the existing accesses to csselr and ccsidr from
>> 32-bit to 64-bit in preparation to add support to those extensions.
>>
>> Cc: Christoffer Dall <christoffer.d...@linaro.org>
>> Cc: Marc Zyngier <marc.zyng...@arm.com>
>> Signed-off-by: Sudeep Holla <sudeep.ho...@arm.com>
>> ---
>>  arch/arm64/kvm/sys_regs.c | 18 +-
>>  1 file changed, 9 insertions(+), 9 deletions(-)
>>
>> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
>> index 5dca1f10340f..a3559a8a2b0c 100644
>> --- a/arch/arm64/kvm/sys_regs.c
>> +++ b/arch/arm64/kvm/sys_regs.c

[..]

>> @@ -2004,8 +2004,8 @@ static int demux_c15_get(u64 id, void __user *uaddr)
>>  
>>  static int demux_c15_set(u64 id, void __user *uaddr)
>>  {
>> -u32 val, newval;
>> -u32 __user *uval = uaddr;
>> +u64 val, newval;
>> +u64 __user *uval = uaddr;
> 
> Doesn't converting these uval pointers to u64 cause us to break the ABI
> as we'll now be reading/writing 64-bit values to userspace with the
> get_user and put_user following the declarations?
> 

Yes, I too have a similar concern. IIUC it is always read via the
kvm_one_reg structure. I could not find any specific user of this register
to cross-check.

-- 
Regards,
Sudeep


Re: [PATCH 1/2] arm64: kvm: reuse existing cache type/info related macros

2017-01-24 Thread Sudeep Holla


On 23/01/17 21:05, Christoffer Dall wrote:
> On Fri, Jan 20, 2017 at 10:50:09AM +0000, Sudeep Holla wrote:
>> We already have various macros related to cache type and bitfields in
>> CLIDR system register. We can replace some of the hardcoded values
>> here using those existing macros.
>>
>> This patch reuses those existing cache type/info related macros and
>> replaces the hardcoded values. It also removes some of the comments
>> that become trivial with the macro names.
>>
>> Cc: Catalin Marinas <catalin.mari...@arm.com>
>> Cc: Will Deacon <will.dea...@arm.com>
>> Cc: Christoffer Dall <christoffer.d...@linaro.org>
>> Cc: Marc Zyngier <marc.zyng...@arm.com>
>> Signed-off-by: Sudeep Holla <sudeep.ho...@arm.com>
>> ---
>>  arch/arm64/include/asm/cachetype.h |  7 +++
>>  arch/arm64/kernel/cacheinfo.c  |  7 ---
>>  arch/arm64/kvm/sys_regs.c  | 27 +--
>>  3 files changed, 20 insertions(+), 21 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/cachetype.h b/arch/arm64/include/asm/cachetype.h
>> index f5588692f1d4..f58b5e3df6b8 100644
>> --- a/arch/arm64/include/asm/cachetype.h
>> +++ b/arch/arm64/include/asm/cachetype.h
>> @@ -39,6 +39,13 @@
>>  
>>  extern unsigned long __icache_flags;
>>  
>> +#define MAX_CACHE_LEVEL 7   /* Max 7 level supported */
>> +/* Ctypen, bits[3(n - 1) + 2 : 3(n - 1)], for n = 1 to 7 */
>> +#define CLIDR_CTYPE_SHIFT(level)    (3 * (level - 1))
>> +#define CLIDR_CTYPE_MASK(level) (7 << CLIDR_CTYPE_SHIFT(level))
>> +#define CLIDR_CTYPE(clidr, level)   \
>> +    (((clidr) & CLIDR_CTYPE_MASK(level)) >> CLIDR_CTYPE_SHIFT(level))
>> +
>>  /*
>>   * NumSets, bits[27:13] - (Number of sets in cache) - 1
>>   * Associativity, bits[12:3] - (Associativity of cache) - 1
>> diff --git a/arch/arm64/kernel/cacheinfo.c b/arch/arm64/kernel/cacheinfo.c
>> index 3f2250fc391b..a460208b08cf 100644
>> --- a/arch/arm64/kernel/cacheinfo.c
>> +++ b/arch/arm64/kernel/cacheinfo.c
>> @@ -26,13 +26,6 @@
>>  #include 
>>  #include 
>>  
>> -#define MAX_CACHE_LEVEL 7   /* Max 7 level supported */
>> -/* Ctypen, bits[3(n - 1) + 2 : 3(n - 1)], for n = 1 to 7 */
>> -#define CLIDR_CTYPE_SHIFT(level)    (3 * (level - 1))
>> -#define CLIDR_CTYPE_MASK(level) (7 << CLIDR_CTYPE_SHIFT(level))
>> -#define CLIDR_CTYPE(clidr, level)   \
>> -    (((clidr) & CLIDR_CTYPE_MASK(level)) >> CLIDR_CTYPE_SHIFT(level))
>> -
>>  static inline enum cache_type get_cache_type(int level)
>>  {
>>  u64 clidr;
>> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
>> index 87e7e6608cd8..5dca1f10340f 100644
>> --- a/arch/arm64/kvm/sys_regs.c
>> +++ b/arch/arm64/kvm/sys_regs.c
>> @@ -21,11 +21,13 @@
>>   */
>>  
>>  #include 
>> +#include 
>>  #include 
>>  #include 
>>  #include 
>>  
>>  #include 
>> +#include 
>>  #include 
>>  #include 
>>  #include 
>> @@ -59,7 +61,7 @@
>>  static u32 cache_levels;
>>  
>>  /* CSSELR values; used to index KVM_REG_ARM_DEMUX_ID_CCSIDR */
>> -#define CSSELR_MAX 12
>> +#define CSSELR_MAX  ((MAX_CACHE_LEVEL - 1) >> 1)
> 
> Did you mean '<< 1' here?
> 

Ah right, sorry for the stupid mistake.

-- 
Regards,
Sudeep
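
For the record, a tiny standalone check of the arithmetic discussed above:
CSSELR packs the cache level in bits [3:1] and the instruction/data selector
in bit 0, so the bound needs a left shift.

	#include <stdio.h>

	#define MAX_CACHE_LEVEL	7

	int main(void)
	{
		/* (level - 1) << 1 rebuilds a CSSELR index; '>>' collapses it. */
		printf("(7 - 1) >> 1 = %d (wrong), (7 - 1) << 1 = %d (right)\n",
		       (MAX_CACHE_LEVEL - 1) >> 1, (MAX_CACHE_LEVEL - 1) << 1);
		return 0;
	}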


[PATCH 2/2] arm64: kvm: upgrade csselr and ccsidr to 64-bit values

2017-01-20 Thread Sudeep Holla
csselr and ccsidr are treated as 64-bit values already elsewhere in the
kernel. It also aligns well with the architecture extensions that allow
64-bit format for ccsidr.

This patch upgrades the existing accesses to csselr and ccsidr from
32-bit to 64-bit in preparation for adding support for those extensions.

Cc: Christoffer Dall <christoffer.d...@linaro.org>
Cc: Marc Zyngier <marc.zyng...@arm.com>
Signed-off-by: Sudeep Holla <sudeep.ho...@arm.com>
---
 arch/arm64/kvm/sys_regs.c | 18 +-
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 5dca1f10340f..a3559a8a2b0c 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -58,15 +58,15 @@
  */
 
 /* 3 bits per cache level, as per CLIDR, but non-existent caches always 0 */
-static u32 cache_levels;
+static u64 cache_levels;
 
 /* CSSELR values; used to index KVM_REG_ARM_DEMUX_ID_CCSIDR */
 #define CSSELR_MAX ((MAX_CACHE_LEVEL - 1) >> 1)
 
 /* Which cache CCSIDR represents depends on CSSELR value. */
-static u32 get_ccsidr(u32 csselr)
+static u64 get_ccsidr(u64 csselr)
 {
-   u32 ccsidr;
+   u64 ccsidr;
 
/* Make sure noone else changes CSSELR during this! */
local_irq_disable();
@@ -1952,9 +1952,9 @@ static int set_invariant_sys_reg(u64 id, void __user 
*uaddr)
return 0;
 }
 
-static bool is_valid_cache(u32 val)
+static bool is_valid_cache(u64 val)
 {
-   u32 level, ctype;
+   u64 level, ctype;
 
if (val >= CSSELR_MAX)
return false;
@@ -1979,8 +1979,8 @@ static bool is_valid_cache(u32 val)
 
 static int demux_c15_get(u64 id, void __user *uaddr)
 {
-   u32 val;
-   u32 __user *uval = uaddr;
+   u64 val;
+   u64 __user *uval = uaddr;
 
/* Fail if we have unknown bits set. */
if (id & ~(KVM_REG_ARCH_MASK|KVM_REG_SIZE_MASK|KVM_REG_ARM_COPROC_MASK
@@ -2004,8 +2004,8 @@ static int demux_c15_get(u64 id, void __user *uaddr)
 
 static int demux_c15_set(u64 id, void __user *uaddr)
 {
-   u32 val, newval;
-   u32 __user *uval = uaddr;
+   u64 val, newval;
+   u64 __user *uval = uaddr;
 
/* Fail if we have unknown bits set. */
if (id & ~(KVM_REG_ARCH_MASK|KVM_REG_SIZE_MASK|KVM_REG_ARM_COPROC_MASK
-- 
2.7.4



[PATCH 1/2] arm64: kvm: reuse existing cache type/info related macros

2017-01-20 Thread Sudeep Holla
We already have various macros related to cache type and bitfields in
CLIDR system register. We can replace some of the hardcoded values
here using those existing macros.

This patch reuses those existing cache type/info related macros and
replaces the hardcoded values. It also removes some of the comments
that become trivial with the macro names.

Cc: Catalin Marinas <catalin.mari...@arm.com>
Cc: Will Deacon <will.dea...@arm.com>
Cc: Christoffer Dall <christoffer.d...@linaro.org>
Cc: Marc Zyngier <marc.zyng...@arm.com>
Signed-off-by: Sudeep Holla <sudeep.ho...@arm.com>
---
 arch/arm64/include/asm/cachetype.h |  7 +++
 arch/arm64/kernel/cacheinfo.c  |  7 ---
 arch/arm64/kvm/sys_regs.c  | 27 +--
 3 files changed, 20 insertions(+), 21 deletions(-)

diff --git a/arch/arm64/include/asm/cachetype.h b/arch/arm64/include/asm/cachetype.h
index f5588692f1d4..f58b5e3df6b8 100644
--- a/arch/arm64/include/asm/cachetype.h
+++ b/arch/arm64/include/asm/cachetype.h
@@ -39,6 +39,13 @@
 
 extern unsigned long __icache_flags;
 
+#define MAX_CACHE_LEVEL    7   /* Max 7 level supported */
+/* Ctypen, bits[3(n - 1) + 2 : 3(n - 1)], for n = 1 to 7 */
+#define CLIDR_CTYPE_SHIFT(level)   (3 * (level - 1))
+#define CLIDR_CTYPE_MASK(level)(7 << CLIDR_CTYPE_SHIFT(level))
+#define CLIDR_CTYPE(clidr, level)  \
+   (((clidr) & CLIDR_CTYPE_MASK(level)) >> CLIDR_CTYPE_SHIFT(level))
+
 /*
  * NumSets, bits[27:13] - (Number of sets in cache) - 1
  * Associativity, bits[12:3] - (Associativity of cache) - 1
diff --git a/arch/arm64/kernel/cacheinfo.c b/arch/arm64/kernel/cacheinfo.c
index 3f2250fc391b..a460208b08cf 100644
--- a/arch/arm64/kernel/cacheinfo.c
+++ b/arch/arm64/kernel/cacheinfo.c
@@ -26,13 +26,6 @@
 #include 
 #include 
 
-#define MAX_CACHE_LEVEL    7   /* Max 7 level supported */
-/* Ctypen, bits[3(n - 1) + 2 : 3(n - 1)], for n = 1 to 7 */
-#define CLIDR_CTYPE_SHIFT(level)   (3 * (level - 1))
-#define CLIDR_CTYPE_MASK(level)(7 << CLIDR_CTYPE_SHIFT(level))
-#define CLIDR_CTYPE(clidr, level)  \
-   (((clidr) & CLIDR_CTYPE_MASK(level)) >> CLIDR_CTYPE_SHIFT(level))
-
 static inline enum cache_type get_cache_type(int level)
 {
u64 clidr;
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 87e7e6608cd8..5dca1f10340f 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -21,11 +21,13 @@
  */
 
 #include 
+#include 
 #include 
 #include 
 #include 
 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -59,7 +61,7 @@
 static u32 cache_levels;
 
 /* CSSELR values; used to index KVM_REG_ARM_DEMUX_ID_CCSIDR */
-#define CSSELR_MAX 12
+#define CSSELR_MAX ((MAX_CACHE_LEVEL - 1) >> 1)
 
 /* Which cache CCSIDR represents depends on CSSELR value. */
 static u32 get_ccsidr(u32 csselr)
@@ -68,9 +70,7 @@ static u32 get_ccsidr(u32 csselr)
 
/* Make sure noone else changes CSSELR during this! */
local_irq_disable();
-   write_sysreg(csselr, csselr_el1);
-   isb();
-   ccsidr = read_sysreg(ccsidr_el1);
+   ccsidr = cache_get_ccsidr(csselr);
local_irq_enable();
 
return ccsidr;
@@ -1960,19 +1960,18 @@ static bool is_valid_cache(u32 val)
return false;
 
/* Bottom bit is Instruction or Data bit.  Next 3 bits are level. */
-   level = (val >> 1);
-   ctype = (cache_levels >> (level * 3)) & 7;
+   level = (val >> 1) + 1;
+   ctype = CLIDR_CTYPE(cache_levels, level);
 
switch (ctype) {
-   case 0: /* No cache */
-   return false;
-   case 1: /* Instruction cache only */
-   return (val & 1);
-   case 2: /* Data cache only */
-   case 4: /* Unified cache */
-   return !(val & 1);
-   case 3: /* Separate instruction and data caches */
+   case CACHE_TYPE_INST:
+   return (val & CACHE_TYPE_INST);
+   case CACHE_TYPE_DATA:
+   case CACHE_TYPE_UNIFIED:
+   return !(val & CACHE_TYPE_INST);
+   case CACHE_TYPE_SEPARATE:
return true;
+   case CACHE_TYPE_NOCACHE:
default: /* Reserved: we can't know instruction or data. */
return false;
}
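
For reference, the CSSELR encoding that is_valid_cache() above relies on
(bit 0 = instruction/data, bits [3:1] = level - 1), as a tiny stand-alone
sketch, not kernel code:

/* val = 0 -> L1 data, val = 1 -> L1 instruction, val = 3 -> L2 instruction */
static unsigned int csselr_level(unsigned int val)
{
	return (val >> 1) + 1;		/* 1-based level, as CLIDR_CTYPE() expects */
}

static unsigned int csselr_is_inst(unsigned int val)
{
	return val & 1;			/* matches the CACHE_TYPE_INST check */
}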
-- 
2.7.4



Re: [PATCH] ARM: dts: vexpress: Support GICC_DIR operations

2016-12-13 Thread Sudeep Holla


On 12/12/16 17:35, Marc Zyngier wrote:
> [+Sudeep]
> 
> On 10/12/16 20:13, Christoffer Dall wrote:
>> The GICv2 CPU interface registers span across 8K, not 4K as indicated in
>> the DT.  Only the GICC_DIR register is located after the initial 4K
>> boundary, leaving a functional system but without support for separately
>> EOI'ing and deactivating interrupts.
>>
>> After this change the system support split priority drop and interrupt
>> deactivation.
>>
>> Signed-off-by: Christoffer Dall 
>> ---
>>  arch/arm/boot/dts/vexpress-v2p-ca15_a7.dts | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/arch/arm/boot/dts/vexpress-v2p-ca15_a7.dts 
>> b/arch/arm/boot/dts/vexpress-v2p-ca15_a7.dts
>> index 0205c97..2e0cf39 100644
>> --- a/arch/arm/boot/dts/vexpress-v2p-ca15_a7.dts
>> +++ b/arch/arm/boot/dts/vexpress-v2p-ca15_a7.dts
>> @@ -126,7 +126,7 @@
>>  #address-cells = <0>;
>>  interrupt-controller;
>>  reg = <0 0x2c001000 0 0x1000>,
>> -  <0 0x2c002000 0 0x1000>,
>> +  <0 0x2c002000 0 0x2000>,
>><0 0x2c004000 0 0x2000>,
>><0 0x2c006000 0 0x2000>;
>>  interrupts = <1 9 0xf04>;
>>
> 
> Acked-by: Marc Zyngier 
> 

Thanks Marc, I see a couple of other instances of this, like TC2 and the
RTSM model on arm64. Do they need to be fixed too? I guess so. If so, I
will fix up this patch to add TC1, and add another one for RTSM.

Also, I see loads of gic-400 compatible DTS files (mainly Rockchip and
Renesas) having just 4K. Are they left like this intentionally? I
remember you fixing most of the DTS files when you found this issue
initially.
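
As a reminder of why the region size matters here: in the GICv2 memory
map, GICC_DIR sits at offset 0x1000 from the CPU interface base, so a 4K
reg entry ends exactly where it begins. A minimal sketch of that
arithmetic (register offsets per the GICv2 spec; the helper below is
illustrative only, not driver code):

#define GICC_CTLR	0x0000	/* CPU interface control */
#define GICC_EOIR	0x0010	/* End Of Interrupt (priority drop) */
#define GICC_DIR	0x1000	/* Deactivate Interrupt, second 4K page */

static int gicc_dir_mapped(unsigned long gicc_map_size)
{
	return GICC_DIR < gicc_map_size;	/* 0 for 0x1000, 1 for 0x2000 */
}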

-- 
Regards,
Sudeep


[PATCH] arm64: KVM: fix build with CONFIG_ARM_PMU disabled

2016-06-08 Thread Sudeep Holla
When CONFIG_ARM_PMU is disabled, we get the following build error:

arch/arm64/kvm/sys_regs.c: In function 'pmu_counter_idx_valid':
arch/arm64/kvm/sys_regs.c:564:27: error: 'ARMV8_PMU_CYCLE_IDX' undeclared (first use in this function)
   if (idx >= val && idx != ARMV8_PMU_CYCLE_IDX)
                            ^
arch/arm64/kvm/sys_regs.c:564:27: note: each undeclared identifier is reported only once for each function it appears in
arch/arm64/kvm/sys_regs.c: In function 'access_pmu_evcntr':
arch/arm64/kvm/sys_regs.c:592:10: error: 'ARMV8_PMU_CYCLE_IDX' undeclared (first use in this function)
   idx = ARMV8_PMU_CYCLE_IDX;
         ^
arch/arm64/kvm/sys_regs.c: In function 'access_pmu_evtyper':
arch/arm64/kvm/sys_regs.c:638:14: error: 'ARMV8_PMU_CYCLE_IDX' undeclared (first use in this function)
   if (idx == ARMV8_PMU_CYCLE_IDX)
       ^
arch/arm64/kvm/hyp/switch.c:86:15: error: 'ARMV8_PMU_USERENR_MASK' undeclared (first use in this function)
   write_sysreg(ARMV8_PMU_USERENR_MASK, pmuserenr_el0);

This patch fixes the build with CONFIG_ARM_PMU disabled.
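
The shape of the fix, reduced to a stand-alone sketch (the FOO names
below are made up, not kernel identifiers): constants that callers may
reference unconditionally must live outside the config #ifdef, while only
the feature's real hooks stay inside it.

/* foo.h: hypothetical feature header */
#define FOO_MAX_COUNTERS	8
#define FOO_CYCLE_IDX		(FOO_MAX_COUNTERS - 1)	/* always visible */

#ifdef CONFIG_FOO
int foo_handle_counter(int idx);		/* real implementation */
#else
static inline int foo_handle_counter(int idx)	/* stub when compiled out */
{
	return 0;
}
#endif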

Cc: Christoffer Dall <christoffer.d...@linaro.org>
Cc: Marc Zyngier <marc.zyng...@arm.com>
Signed-off-by: Sudeep Holla <sudeep.ho...@arm.com>
---
 include/kvm/arm_pmu.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index fe389ac31489..92e7e97ca8ff 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -18,13 +18,13 @@
 #ifndef __ASM_ARM_KVM_PMU_H
 #define __ASM_ARM_KVM_PMU_H
 
-#ifdef CONFIG_KVM_ARM_PMU
-
 #include <linux/perf_event.h>
 #include <asm/perf_event.h>
 
 #define ARMV8_PMU_CYCLE_IDX	(ARMV8_PMU_MAX_COUNTERS - 1)
 
+#ifdef CONFIG_KVM_ARM_PMU
+
 struct kvm_pmc {
u8 idx; /* index into the pmu->pmc array */
struct perf_event *perf_event;
-- 
2.7.4



Re: [PATCH] arm64: KVM: unregister notifiers in hyp mode teardown path

2016-04-04 Thread Sudeep Holla



On 04/04/16 14:55, Marc Zyngier wrote:
> Hi Sudeep,
> 
> On 04/04/16 14:46, Sudeep Holla wrote:
> 
> [...]
> 
>> @@ -1270,12 +1279,7 @@ static int init_hyp_mode(void)
>>  	free_boot_hyp_pgd();
>>  #endif
>> 
>> -	cpu_notifier_register_begin();
>> -
>> -	err = __register_cpu_notifier(&hyp_init_cpu_nb);
>> -
>> -	cpu_notifier_register_done();
>> -
>> +	err = register_cpu_notifier(&hyp_init_cpu_nb);
> 
> We went from something like this to the cpu_notifier_register_begin/end
> with 8146875de ("arm, kvm: Fix CPU hotplug callback registration").
> 
> What makes it more acceptable now?



Correct, but in the initial code even init_hyp_mode() was protected
under cpu_notifier_register_begin(). IIUC, the recent re-org eliminated
the need for that, and the above code now exactly resembles what
register_cpu_notifier() does.
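
For reference, register_cpu_notifier() in the core code is essentially
the following (a simplified sketch from my reading of kernel/cpu.c around
v4.6, not a verbatim copy):

int register_cpu_notifier(struct notifier_block *nb)
{
	int ret;

	cpu_maps_update_begin();	/* what cpu_notifier_register_begin() wraps */
	ret = raw_notifier_chain_register(&cpu_chain, nb);
	cpu_maps_update_done();		/* what cpu_notifier_register_done() wraps */
	return ret;
}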

If that's not the case, then we need to move cpu_notifier_register_begin()
further up and retain __register_cpu_notifier().

I mainly changed it to keep it consistent with the unregister call.

--
Regards,
Sudeep


[PATCH] arm64: KVM: unregister notifiers in hyp mode teardown path

2016-04-04 Thread Sudeep Holla
Commit 1e947bad0b63 ("arm64: KVM: Skip HYP setup when already running
in HYP") re-organized the hyp init code and ended up leaving the CPU
hotplug and PM notifiers registered even if hyp mode initialization fails.

Since KVM is not yet supported with ACPI, the above-mentioned commit
breaks CPU hotplug in ACPI boot.

This patch fixes teardown_hyp_mode to properly unregister both CPU
hotplug and PM notifiers in the teardown path.
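
The general shape of the fix, as a stand-alone sketch with made-up names
(illustrative only, not the exact kernel code): whatever the init path
registers, the teardown path must unregister, in reverse order.

/* hypothetical driver, kernel-style sketch (pre-4.10 notifier API) */
#include <linux/cpu.h>
#include <linux/notifier.h>

static struct notifier_block feature_cpu_nb;

static void feature_pm_init(void) { }	/* stand-in for a CPU PM notifier */
static void feature_pm_exit(void) { }

static int feature_init(void)
{
	int err = register_cpu_notifier(&feature_cpu_nb);

	if (err)
		return err;
	feature_pm_init();
	return 0;
}

static void feature_teardown(void)
{
	feature_pm_exit();			/* undo in reverse order */
	unregister_cpu_notifier(&feature_cpu_nb);
}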

Fixes: 1e947bad0b63 ("arm64: KVM: Skip HYP setup when already running in HYP")
Cc: Christoffer Dall <christoffer.d...@linaro.org>
Cc: Marc Zyngier <marc.zyng...@arm.com>
Signed-off-by: Sudeep Holla <sudeep.ho...@arm.com>
---
 arch/arm/kvm/arm.c | 16 ++--
 1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 6accd66d26f0..42b3a1f83271 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -1101,10 +1101,17 @@ static void __init hyp_cpu_pm_init(void)
 {
 	cpu_pm_register_notifier(&hyp_init_cpu_pm_nb);
 }
+static void __init hyp_cpu_pm_exit(void)
+{
+	cpu_pm_unregister_notifier(&hyp_init_cpu_pm_nb);
+}
 #else
 static inline void hyp_cpu_pm_init(void)
 {
 }
+static inline void hyp_cpu_pm_exit(void)
+{
+}
 #endif
 
 static void teardown_common_resources(void)
@@ -1166,6 +1173,8 @@ static void teardown_hyp_mode(void)
free_hyp_pgds();
for_each_possible_cpu(cpu)
free_page(per_cpu(kvm_arm_hyp_stack_page, cpu));
+	unregister_cpu_notifier(&hyp_init_cpu_nb);
+   hyp_cpu_pm_exit();
 }
 
 static int init_vhe_mode(void)
@@ -1270,12 +1279,7 @@ static int init_hyp_mode(void)
free_boot_hyp_pgd();
 #endif
 
-   cpu_notifier_register_begin();
-
-	err = __register_cpu_notifier(&hyp_init_cpu_nb);
-
-   cpu_notifier_register_done();
-
+	err = register_cpu_notifier(&hyp_init_cpu_nb);
if (err) {
kvm_err("Cannot register HYP init CPU notifier (%d)\n", err);
goto out_err;
-- 
1.9.1
