On 07.10.2013, at 16:23, Cedric Le Goater <c...@fr.ibm.com> wrote:

> Hi Alex,
> 
> On 10/04/2013 02:50 PM, Alexander Graf wrote:
>> 
>> On 03.10.2013, at 13:03, Cédric Le Goater wrote:
>> 
>>> MMIO emulation reads the last instruction executed by the guest
>>> and then emulates. If the guest is running in Little Endian mode,
>>> the instruction needs to be byte-swapped before being emulated.
>>> 
>>> This patch stores the last instruction in the endian order of the
>>> host, byte-swapping it if needed. The common code which fetches
>>> last_inst uses a helper routine kvmppc_need_byteswap(), and the
>>> exit paths for the Book3S PR and HV guests use their own versions
>>> in assembly.
>>> 
>>> kvmppc_emulate_instruction() also uses kvmppc_need_byteswap() to
>>> determine in which endian order the MMIO needs to be done.
>>> 
>>> The patch is based on Alex Graf's kvm-ppc-queue branch and it
>>> has been tested on Big Endian and Little Endian HV guests and
>>> Big Endian PR guests.
>>> 
>>> Signed-off-by: Cédric Le Goater <c...@fr.ibm.com>
>>> ---
>>> 
>>> Here are some comments/questions : 
>>> 
>>> * the host is assumed to be running in Big Endian. When Little Endian
>>>  hosts are supported in the future, we will use the CPU features to
>>>  fix kvmppc_need_byteswap()
>>> 
>>> * the 'is_bigendian' parameter of the routines kvmppc_handle_load()
>>>  and kvmppc_handle_store() seems redundant, but the *BRX opcodes
>>>  make it unclear whether removing it would be an improvement. We
>>>  could eventually rename the parameter to 'byteswap' and the
>>>  attribute vcpu->arch.mmio_is_bigendian to
>>>  vcpu->arch.mmio_need_byteswap. Anyhow, the current naming sucks
>>>  and I would be happy to have some directions to fix it.
>>> 
>>> 
>>> 
>>> arch/powerpc/include/asm/kvm_book3s.h   |   15 ++++++-
>>> arch/powerpc/kvm/book3s_64_mmu_hv.c     |    4 ++
>>> arch/powerpc/kvm/book3s_hv_rmhandlers.S |   14 +++++-
>>> arch/powerpc/kvm/book3s_segment.S       |   14 +++++-
>>> arch/powerpc/kvm/emulate.c              |   71 +++++++++++++++++--------------
>>> 5 files changed, 83 insertions(+), 35 deletions(-)
>>> 
>>> diff --git a/arch/powerpc/include/asm/kvm_book3s.h 
>>> b/arch/powerpc/include/asm/kvm_book3s.h
>>> index 0ec00f4..36c5573 100644
>>> --- a/arch/powerpc/include/asm/kvm_book3s.h
>>> +++ b/arch/powerpc/include/asm/kvm_book3s.h
>>> @@ -270,14 +270,22 @@ static inline ulong kvmppc_get_pc(struct kvm_vcpu 
>>> *vcpu)
>>>     return vcpu->arch.pc;
>>> }
>>> 
>>> +static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu)
>>> +{
>>> +   return vcpu->arch.shared->msr & MSR_LE;
>>> +}
>>> +
>>> static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
>>> {
>>>     ulong pc = kvmppc_get_pc(vcpu);
>>> 
>>>     /* Load the instruction manually if it failed to do so in the
>>>      * exit path */
>>> -   if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED)
>>> +   if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED) {
>>>             kvmppc_ld(vcpu, &pc, sizeof(u32), &vcpu->arch.last_inst, false);
>>> +           if (kvmppc_need_byteswap(vcpu))
>>> +                   vcpu->arch.last_inst = swab32(vcpu->arch.last_inst);
>> 
>> Could you please introduce a new helper to load 32bit numbers? Something 
>> like kvmppc_ldl or kvmppc_ld32. That'll be easier to read here then :).
> 
> OK. I did something in that spirit in the next patchset I am about to
> send, and I will respin if needed. There is one fuzzy area though:
> kvmppc_read_inst().
> 
> It calls kvmppc_get_last_inst() and then kvmppc_ld() again. Is that
> actually useful?

We can assume that the contents of vcpu->arch.last_inst are valid (which is 
what kvmppc_get_last_inst relies on) only when we hit one of these interrupts:

        /* We only load the last instruction when it's safe */
        cmpwi   r12, BOOK3S_INTERRUPT_DATA_STORAGE
        beq     ld_last_inst
        cmpwi   r12, BOOK3S_INTERRUPT_PROGRAM
        beq     ld_last_inst
        cmpwi   r12, BOOK3S_INTERRUPT_SYSCALL
        beq     ld_last_prev_inst
        cmpwi   r12, BOOK3S_INTERRUPT_ALIGNMENT
        beq-    ld_last_inst
#ifdef CONFIG_PPC64
BEGIN_FTR_SECTION
        cmpwi   r12, BOOK3S_INTERRUPT_H_EMUL_ASSIST
        beq-    ld_last_inst
END_FTR_SECTION_IFSET(CPU_FTR_HVMODE)
#endif

        b       no_ld_last_inst

Outside of these interrupt handlers, we have to ensure that we manually load 
the instruction and, if that fails, inject an interrupt into the guest to 
indicate that we couldn't load it.

I have to admit that the code flow is slightly confusing here. If you have 
suggestions how to improve it, I'm more than happy to see patches :).


Alex

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
