On 10/04/2013 03:48 PM, Aneesh Kumar K.V wrote:
> Cédric Le Goater <[email protected]> writes:
>
>> MMIO emulation reads the last instruction executed by the guest
>> and then emulates. If the guest is running in Little Endian mode,
>> the instruction needs to be byte-swapped before being emulated.
>>
>> This patch stores the last instruction in the endian order of the
>> host, doing a byte-swap if needed. The common code which fetches
>> last_inst uses a helper routine, kvmppc_need_byteswap(), and the
>> exit paths for the Book3S PR and HV guests use their own versions
>> in assembly.
>>
>> kvmppc_emulate_instruction() also uses kvmppc_need_byteswap() to
>> determine in which endian order the MMIO needs to be done.
>>
>> The patch is based on Alex Graf's kvm-ppc-queue branch and it
>> has been tested on Big Endian and Little Endian HV guests and
>> Big Endian PR guests.
>>
>> Signed-off-by: Cédric Le Goater <[email protected]>
>> ---
>>
>> Here are some comments/questions :
>>
>> * the host is assumed to be running in Big Endian. When Little Endian
>> hosts are supported in the future, we will use the CPU features to
>> fix kvmppc_need_byteswap()
>>
>> * the 'is_bigendian' parameter of the routines kvmppc_handle_load()
>> and kvmppc_handle_store() seems redundant, but the *BRX opcodes
>> make it unclear how to improve this. We could eventually rename the
>> parameter to 'byteswap' and the attribute vcpu->arch.mmio_is_bigendian
>> to vcpu->arch.mmio_need_byteswap. Anyhow, the current naming sucks
>> and I would be happy to have some direction on how to fix it.
>>
>>
>>
>> arch/powerpc/include/asm/kvm_book3s.h | 15 ++++++-
>> arch/powerpc/kvm/book3s_64_mmu_hv.c | 4 ++
>> arch/powerpc/kvm/book3s_hv_rmhandlers.S | 14 +++++-
>> arch/powerpc/kvm/book3s_segment.S | 14 +++++-
>> arch/powerpc/kvm/emulate.c | 71 +++++++++++++++++--------------
>> 5 files changed, 83 insertions(+), 35 deletions(-)
>>
>> diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
>> index 0ec00f4..36c5573 100644
>> --- a/arch/powerpc/include/asm/kvm_book3s.h
>> +++ b/arch/powerpc/include/asm/kvm_book3s.h
>> @@ -270,14 +270,22 @@ static inline ulong kvmppc_get_pc(struct kvm_vcpu *vcpu)
>> return vcpu->arch.pc;
>> }
>>
>> +static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu)
>> +{
>> + return vcpu->arch.shared->msr & MSR_LE;
>> +}
>> +
>
> Maybe kvmppc_need_instbyteswap()? Because for data it also depends on
> the SLE bit. Don't we also need to check the host platform endianness
> here, i.e. whether the host is __BIG_ENDIAN__?
I think we will wait for the host to become Little Endian before adding
more complexity.
>> static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
>> {
>> ulong pc = kvmppc_get_pc(vcpu);
>>
>> /* Load the instruction manually if it failed to do so in the
>> * exit path */
>> - if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED)
>> + if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED) {
>> kvmppc_ld(vcpu, &pc, sizeof(u32), &vcpu->arch.last_inst, false);
>> + if (kvmppc_need_byteswap(vcpu))
>> + vcpu->arch.last_inst = swab32(vcpu->arch.last_inst);
>> + }
>>
>> return vcpu->arch.last_inst;
>> }
>> @@ -293,8 +301,11 @@ static inline u32 kvmppc_get_last_sc(struct kvm_vcpu *vcpu)
>>
>> /* Load the instruction manually if it failed to do so in the
>> * exit path */
>> - if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED)
>> + if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED) {
>> kvmppc_ld(vcpu, &pc, sizeof(u32), &vcpu->arch.last_inst, false);
>> + if (kvmppc_need_byteswap(vcpu))
>> + vcpu->arch.last_inst = swab32(vcpu->arch.last_inst);
>> + }
>>
>> return vcpu->arch.last_inst;
>> }
>> diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
>> index 3a89b85..28130c7 100644
>> --- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
>> +++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
>> @@ -547,6 +547,10 @@ static int kvmppc_hv_emulate_mmio(struct kvm_run *run, struct kvm_vcpu *vcpu,
>> ret = kvmppc_ld(vcpu, &srr0, sizeof(u32), &last_inst, false);
>> if (ret != EMULATE_DONE || last_inst == KVM_INST_FETCH_FAILED)
>> return RESUME_GUEST;
>> +
>> + if (kvmppc_need_byteswap(vcpu))
>> + last_inst = swab32(last_inst);
>> +
>> vcpu->arch.last_inst = last_inst;
>> }
>>
>> diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
>> index dd80953..1d3ee40 100644
>> --- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
>> +++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
>> @@ -1393,14 +1393,26 @@ fast_interrupt_c_return:
>> lwz r8, 0(r10)
>> mtmsrd r3
>>
>> + ld r0, VCPU_MSR(r9)
>> +
>> + /* r10 = vcpu->arch.msr & MSR_LE */
>> + rldicl r10, r0, 0, 63
>> + cmpdi r10, 0
>> + bne 2f
>> +
>> /* Store the result */
>> stw r8, VCPU_LAST_INST(r9)
>>
>> /* Unset guest mode. */
>> - li r0, KVM_GUEST_MODE_NONE
>> +1: li r0, KVM_GUEST_MODE_NONE
>> stb r0, HSTATE_IN_GUEST(r13)
>> b guest_exit_cont
>>
>> + /* Swap and store the result */
>> +2: addi r11, r9, VCPU_LAST_INST
>> + stwbrx r8, 0, r11
>> + b 1b
>> +
>> /*
>> * Similarly for an HISI, reflect it to the guest as an ISI unless
>> * it is an HPTE not found fault for a page that we have paged out.
>> diff --git a/arch/powerpc/kvm/book3s_segment.S b/arch/powerpc/kvm/book3s_segment.S
>> index 1abe478..bf20b45 100644
>> --- a/arch/powerpc/kvm/book3s_segment.S
>> +++ b/arch/powerpc/kvm/book3s_segment.S
>> @@ -287,7 +287,19 @@ ld_last_inst:
>> sync
>>
>> #endif
>> - stw r0, SVCPU_LAST_INST(r13)
>> + ld r8, SVCPU_SHADOW_SRR1(r13)
>> +
>> + /* r10 = vcpu->arch.msr & MSR_LE */
>> + rldicl r10, r0, 0, 63
>
> that should be ?
> rldicl r10, r8, 0, 63
Oops. Good catch.
Thanks for the review Aneesh.
C.