On 20.08.2013, at 14:57, Aneesh Kumar K.V wrote:
> Alexander Graf <[email protected]> writes:
>
>> On 19.08.2013, at 09:25, Aneesh Kumar K.V wrote:
>>
>>> Alexander Graf <[email protected]> writes:
>>>
>>>> On 11.08.2013, at 20:16, Aneesh Kumar K.V wrote:
>>>>
>>>>> From: "Aneesh Kumar K.V" <[email protected]>
>>>>>
>>>>> Without this, a value of rb=0 and rs=0 results in us replacing the
>>>>> 0th index.
>>>>>
>>>>> Signed-off-by: Aneesh Kumar K.V <[email protected]>
>>>>
>>>> Wrong mailing list again ;).
>>>
>>> Will post the series again with an updated commit message to the qemu list.
>>>
>>>>
>>>>> ---
>>>>> target-ppc/kvm.c | 14 ++++++++++++--
>>>>> 1 file changed, 12 insertions(+), 2 deletions(-)
>>>>>
>>>>> diff --git a/target-ppc/kvm.c b/target-ppc/kvm.c
>>>>> index 30a870e..5d4e613 100644
>>>>> --- a/target-ppc/kvm.c
>>>>> +++ b/target-ppc/kvm.c
>>>>> @@ -1034,8 +1034,18 @@ int kvm_arch_get_registers(CPUState *cs)
>>>>>          /* Sync SLB */
>>>>>  #ifdef TARGET_PPC64
>>>>>          for (i = 0; i < 64; i++) {
>>>>> -            ppc_store_slb(env, sregs.u.s.ppc64.slb[i].slbe,
>>>>> -                          sregs.u.s.ppc64.slb[i].slbv);
>>>>> +            target_ulong rb = sregs.u.s.ppc64.slb[i].slbe;
>>>>> +            /*
>>>>> +             * KVM_GET_SREGS doesn't return SLB entries with the slot
>>>>> +             * information matching the index, so don't depend on the
>>>>> +             * slot information in the returned value.
>>>>
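(For context: ppc_store_slb() takes the target slot from the low bits of
rb itself, so an all-zero entry decodes to slot 0. From memory it's
roughly this shape; see target-ppc/mmu_helper.c for the real code:

    int ppc_store_slb(CPUPPCState *env, target_ulong rb, target_ulong rs)
    {
        /* The slot index is encoded in the low bits of rb */
        int slot = rb & 0xfff;
        ppc_slb_t *slb = &env->slb[slot];

        /* The slot bits are masked out of the stored esid */
        slb->esid = rb & (SLB_ESID_ESID | SLB_ESID_V);
        slb->vsid = rs;
        return 0; /* sketch only, error checks omitted */
    }

so every zeroed entry in the array overwrites slot 0 again.)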
>>>> This is the generating code in book3s_pr.c:
>>>>
>>>>     if (vcpu->arch.hflags & BOOK3S_HFLAG_SLB) {
>>>>         for (i = 0; i < 64; i++) {
>>>>             sregs->u.s.ppc64.slb[i].slbe = vcpu->arch.slb[i].orige | i;
>>>>             sregs->u.s.ppc64.slb[i].slbv = vcpu->arch.slb[i].origv;
>>>>         }
>>>>
>>>> Where exactly did you see broken slbe entries?
>>>>
>>>
>>> I noticed this when adding support for guest memory dumping via the
>>> qemu gdb server. The array we get back looks like this:
>>>
>>> slbe0 slbv0
>>> slbe1 slbv1
>>> 0000 00000
>>> 0000 00000
>>
>> Ok, so that's where the problem lies. Why are the entries 0 here?
>> Either we try to fetch more entries than we should, we populate
>> entries incorrectly, or the kernel simply returns invalid SLB entry
>> values for invalid entries.
>
>
> The ioctl zeroes out the sregs and fills only slb_max entries, so we
> find zero-filled entries above slb_max. Also, we don't pass slb_max to
> user space, so userspace has to look at all 64 entries.
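Right. One defensive option on the QEMU side would be to skip entries
that don't have the valid bit set, something like this (untested sketch;
SLB_ESID_V is the valid bit define in target-ppc/cpu.h):

    for (i = 0; i < 64; i++) {
        target_ulong rb = sregs.u.s.ppc64.slb[i].slbe;
        if (!(rb & SLB_ESID_V)) {
            /* The kernel never filled this slot, so don't sync it */
            continue;
        }
        ppc_store_slb(env, rb, sregs.u.s.ppc64.slb[i].slbv);
    }

But I don't think we need the check at all.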
We do pass slb_max; it's just called differently and calculated implicitly :).
How about something like this:
diff --git a/target-ppc/kvm.c b/target-ppc/kvm.c
index 30a870e..29a2ec3 100644
--- a/target-ppc/kvm.c
+++ b/target-ppc/kvm.c
@@ -818,6 +818,8 @@ int kvm_arch_put_registers(CPUState *cs, int level)
 
         /* Sync SLB */
 #ifdef TARGET_PPC64
+        /* We need to loop through all entries to give them potentially
+           valid values */
         for (i = 0; i < 64; i++) {
             sregs.u.s.ppc64.slb[i].slbe = env->slb[i].esid;
             sregs.u.s.ppc64.slb[i].slbv = env->slb[i].vsid;
@@ -1033,7 +1035,7 @@ int kvm_arch_get_registers(CPUState *cs)
 
         /* Sync SLB */
 #ifdef TARGET_PPC64
-        for (i = 0; i < 64; i++) {
+        for (i = 0; i < env->slb_nr; i++) {
             ppc_store_slb(env, sregs.u.s.ppc64.slb[i].slbe,
                           sregs.u.s.ppc64.slb[i].slbv);
         }
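The asymmetry is intentional: on the put side we keep writing all 64
slots so that any stale entries on the kernel side get overwritten,
while on the get side anything at or above env->slb_nr can never have
been filled, assuming the kernel's slb_max stays within the SLB size we
configured (env->slb_nr is set per CPU model in translate_init.c).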
>
>
>>
>> Are you seeing this with PR KVM or HV KVM?
>>
>
> HV KVM
If the above doesn't help, could you please dig out how HV KVM assembles
its SLB information and paste it here? :)
Alex