On 05.11.2013, at 13:28, Cedric Le Goater <[email protected]> wrote:
> On 11/04/2013 12:44 PM, Alexander Graf wrote:
>>
>> On 10.10.2013, at 12:16, Paul Mackerras <[email protected]> wrote:
>>
>>> On Wed, Oct 09, 2013 at 10:29:53AM +0200, Alexander Graf wrote:
>>>>
>>>>
>>>> Am 09.10.2013 um 07:59 schrieb Paul Mackerras <[email protected]>:
>>>>
>>>>> On Wed, Oct 09, 2013 at 01:46:29AM +0200, Alexander Graf wrote:
>>>>>>
>>>>>>
>>>>>> Am 09.10.2013 um 01:31 schrieb Paul Mackerras <[email protected]>:
>>>>>>
>>>>>>> True, until we get to POWER8 with its split little-endian support,
>>>>>>> where instructions and data can have different endianness...
>>>>>>
>>>>>> How exactly does that work?
>>>>>
>>>>> They added an extra MSR bit called SLE which enables the split-endian
>>>>> mode. It's bit 5 (IBM numbering). For backwards compatibility, the
>>>>> LE bit controls instruction endianness, and data endianness depends on
>>>>> LE ^ SLE, that is, with SLE = 0 things work as before. With SLE=1 and
>>>>> LE=0 you get little-endian data and big-endian instructions, and vice
>>>>> versa with SLE=1 and LE=1.
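>>>>>
>>>>> In other words, treating LE and SLE as single-bit values:
>>>>>
>>>>> 	SLE  LE   instructions   data
>>>>> 	 0   0    big            big
>>>>> 	 0   1    little         little
>>>>> 	 1   0    big            little
>>>>> 	 1   1    little         big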
>>>>
>>>> So ld32 should only honor LE and get_last_inst only looks at SLE and swaps
>>>> even the vcpu cached version if it's set, no?
>>>
>>> Not exactly; instruction endianness depends only on MSR[LE], so
>>> get_last_inst should not look at MSR[SLE]. I would think the vcpu
>>> cached version should be host endian always.
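>>>
>>> Roughly, as a sketch of the fetch path (not the actual code, and
>>> not tested; names like kvmppc_get_pc and last_inst are from memory):
>>>
>>> 	u32 last_inst;
>>> 	ulong pc = kvmppc_get_pc(vcpu);
>>>
>>> 	/* raw fetch; data == false, so MSR[SLE] must not matter here */
>>> 	kvmppc_ld(vcpu, &pc, sizeof(u32), &last_inst, false);
>>>
>>> 	/* cache host-endian: swap iff guest MSR[LE] differs from host */
>>> #ifdef __LITTLE_ENDIAN__
>>> 	if (!(vcpu->arch.shared->msr & MSR_LE))
>>> 		last_inst = swab32(last_inst);
>>> #else
>>> 	if (vcpu->arch.shared->msr & MSR_LE)
>>> 		last_inst = swab32(last_inst);
>>> #endif
>>> 	vcpu->arch.last_inst = last_inst;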
>>
>> I agree. It makes the code flow easier.
>
>
> To take the host endian order into account when deciding whether we
> should byteswap, we could modify kvmppc_need_byteswap() as follows:
>
>
> +/*
> + * Compare endian order of host and guest to determine whether we need
> + * to byteswap or not
> + */
> static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu)
> {
> - return vcpu->arch.shared->msr & MSR_LE;
> + return ((mfmsr() & (MSR_LE)) >> MSR_LE_LG) ^
mfmsr() is slow. Just use #ifdef __LITTLE_ENDIAN__.
> + ((vcpu->arch.shared->msr & (MSR_LE)) >> MSR_LE_LG);
> }
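I.e. something along these lines (untested sketch):

static inline bool kvmppc_need_byteswap(struct kvm_vcpu *vcpu)
{
	/* byteswap whenever guest and host endianness differ */
#ifdef __LITTLE_ENDIAN__
	return !(vcpu->arch.shared->msr & MSR_LE);
#else
	return vcpu->arch.shared->msr & MSR_LE;
#endif
}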
>
>
>
> and I think MSR[SLE] could be handled this way:
>
>
> static inline bool kvmppc_is_bigendian(struct kvm_vcpu *vcpu)
> @@ -284,10 +289,19 @@ static inline int kvmppc_ld32(struct kvm_vcpu *vcpu, ulong *eaddr,
> u32 *ptr, bool data)
> {
> int ret;
> + bool byteswap;
>
> ret = kvmppc_ld(vcpu, eaddr, sizeof(u32), ptr, data);
>
> - if (kvmppc_need_byteswap(vcpu))
> + byteswap = kvmppc_need_byteswap(vcpu);
> +
> + /* if we are loading data from a guest which is in Split
> + * Little Endian mode, the byte order is reversed
Only for data. Instructions are still not reversed. You express this well in the
code, but not in the comment.
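Maybe something like:

	/* MSR[SLE] reverses the byte order of data accesses only;
	 * instruction fetches still follow MSR[LE] alone.
	 */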
> + */
> + if (data && (vcpu->arch.shared->msr & MSR_SLE))
> + byteswap = !byteswap;
> +
> + if (byteswap)
> *ptr = swab32(*ptr);
>
> return ret;
>
>
> How does that look?
>
> This is not tested and the MSR_SLE definition is missing. I will fix that in
> v5.
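For reference, if I convert the IBM numbering correctly, bit 5 of a 64-bit
MSR should come out as something like this (untested, and the names are
just my guess):

#define MSR_SLE_LG	58	/* = 63 - 5, IBM bit 5 */
#define MSR_SLE		__MASK(MSR_SLE_LG)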
Alrighty :)
Alex