On 31.03.2011, at 00:00, Scott Wood wrote:

> This is done lazily.  The SPE save will be done only if the guest has
> used SPE since the last preemption or heavyweight exit.  Restore will be
> done only on demand, when enabling MSR_SPE in the shadow MSR, in response
> to an SPE fault or mtmsr emulation.
> 
> For SPEFSCR, Linux already switches it on context switch (non-lazily), so
> the only remaining bit is to save it between qemu and the guest.
> 
> Signed-off-by: Liu Yu <[email protected]>
> Signed-off-by: Scott Wood <[email protected]>
> ---
> v4:
> - use shadow_msr rather than msr_block
> - restore guest SPE state only when shadow_msr[SPE] is set, not
>   when delivering an SPE exception to a guest with shared->msr[SPE]
>   clear.
> - use refactored SPE save/restore macros rather than duplicate

Very nice; it looks a lot cleaner than before and aligns better with book3s now.

> 
> arch/powerpc/include/asm/kvm_host.h  |    6 +++
> arch/powerpc/include/asm/reg_booke.h |    1 +
> arch/powerpc/kernel/asm-offsets.c    |    6 +++
> arch/powerpc/kvm/booke.c             |   70 +++++++++++++++++++++++++++++++++-
> arch/powerpc/kvm/booke.h             |   18 ++-------
> arch/powerpc/kvm/booke_interrupts.S  |   40 +++++++++++++++++++
> arch/powerpc/kvm/e500.c              |   19 +++++++++-
> 7 files changed, 143 insertions(+), 17 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
> index 072ec7b..a3810ab 100644
> --- a/arch/powerpc/include/asm/kvm_host.h
> +++ b/arch/powerpc/include/asm/kvm_host.h
> @@ -195,6 +195,12 @@ struct kvm_vcpu_arch {
>       u64 fpr[32];
>       u64 fpscr;
> 
> +#ifdef CONFIG_SPE
> +     ulong evr[32];
> +     ulong spefscr;
> +     ulong host_spefscr;
> +     u64 acc;
> +#endif
> #ifdef CONFIG_ALTIVEC
>       vector128 vr[32];
>       vector128 vscr;
> diff --git a/arch/powerpc/include/asm/reg_booke.h b/arch/powerpc/include/asm/reg_booke.h
> index 3b1a9b7..2705f9a 100644
> --- a/arch/powerpc/include/asm/reg_booke.h
> +++ b/arch/powerpc/include/asm/reg_booke.h
> @@ -312,6 +312,7 @@
> #define ESR_ILK               0x00100000      /* Instr. Cache Locking */
> #define ESR_PUO               0x00040000      /* Unimplemented Operation exception */
> #define ESR_BO                0x00020000      /* Byte Ordering */
> +#define ESR_SPV              0x00000080      /* Signal Processing operation */
> 
> /* Bit definitions related to the DBCR0. */
> #if defined(CONFIG_40x)
> diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
> index 5120a63..4d39f2d 100644
> --- a/arch/powerpc/kernel/asm-offsets.c
> +++ b/arch/powerpc/kernel/asm-offsets.c
> @@ -494,6 +494,12 @@ int main(void)
>       DEFINE(TLBCAM_MAS3, offsetof(struct tlbcam, MAS3));
>       DEFINE(TLBCAM_MAS7, offsetof(struct tlbcam, MAS7));
> #endif
> +#ifdef CONFIG_SPE
> +     DEFINE(VCPU_EVR, offsetof(struct kvm_vcpu, arch.evr[0]));
> +     DEFINE(VCPU_ACC, offsetof(struct kvm_vcpu, arch.acc));
> +     DEFINE(VCPU_SPEFSCR, offsetof(struct kvm_vcpu, arch.spefscr));
> +     DEFINE(VCPU_HOST_SPEFSCR, offsetof(struct kvm_vcpu, arch.host_spefscr));
> +#endif /* CONFIG_SPE */
> 
> #ifdef CONFIG_KVM_EXIT_TIMING
>       DEFINE(VCPU_TIMING_EXIT_TBU, offsetof(struct kvm_vcpu,
> diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
> index 1204e1d..02a1aa9 100644
> --- a/arch/powerpc/kvm/booke.c
> +++ b/arch/powerpc/kvm/booke.c
> @@ -13,6 +13,7 @@
>  * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
>  *
>  * Copyright IBM Corp. 2007
> + * Copyright 2010-2011 Freescale Semiconductor, Inc.
>  *
>  * Authors: Hollis Blanchard <[email protected]>
>  *          Christian Ehrhardt <[email protected]>
> @@ -78,6 +79,43 @@ void kvmppc_dump_vcpu(struct kvm_vcpu *vcpu)
>       }
> }
> 
> +#ifdef CONFIG_SPE
> +static void kvmppc_vcpu_enable_spe(struct kvm_vcpu *vcpu)
> +{
> +     enable_kernel_spe();
> +     kvmppc_load_guest_spe(vcpu);

Are you sure this is only ever called from !preempt code?
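If it isn't, a context switch between enable_kernel_spe() and the load could save the freshly loaded guest state into the wrong thread's save area. What I'd expect the invariant to look like, as a toy model (the stubs here are my own, not the kernel primitives; the asserts just encode the property I'm asking about):

```c
#include <assert.h>

/* Toy stand-ins for the kernel primitives: preempt_disable() just
 * bumps a counter, and the stubs assert that preemption is off for
 * the whole enable+load sequence. */
static int preempt_count;
static int spe_loads_done;

static void preempt_disable_stub(void) { preempt_count++; }
static void preempt_enable_stub(void)  { preempt_count--; }

static void enable_kernel_spe_stub(void)
{
	/* the property in question: no preemption possible here */
	assert(preempt_count > 0);
}

static void load_guest_spe_stub(void)
{
	assert(preempt_count > 0);  /* still non-preemptible */
	spe_loads_done++;
}

/* what a preempt-safe call site would look like */
static void vcpu_enable_spe(void)
{
	preempt_disable_stub();
	enable_kernel_spe_stub();
	load_guest_spe_stub();
	preempt_enable_stub();
}
```

If all callers already sit in exception context with preemption off, the explicit disable is of course unnecessary; I just want to make sure that holds.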


Alex

--
To unsubscribe from this list: send the line "unsubscribe kvm-ppc" in
the body of a message to [email protected]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
