This makes it consistent with H_ENTER, where we clear the key
bits. We also want to use the virtual page class key protection
mechanism for indicating host page faults. For that we will be using
key class indices 30 and 31. So prevent the guest from updating key
bits until we add proper support for the virtual page class protection
mechanism for the guest. This has no impact on PAPR Linux guests,
because the Linux guest currently doesn't use the virtual page class
key protection model.

Signed-off-by: Aneesh Kumar K.V <aneesh.ku...@linux.vnet.ibm.com>
---
 arch/powerpc/kvm/book3s_hv_rm_mmu.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_hv_rm_mmu.c b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
index 157a5f35edfa..f908845f7379 100644
--- a/arch/powerpc/kvm/book3s_hv_rm_mmu.c
+++ b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
@@ -658,13 +658,17 @@ long kvmppc_h_protect(struct kvm_vcpu *vcpu, unsigned long flags,
        }
 
        v = pte;
+       /*
+        * We ignore key bits here. We use classes 30 and 31 for
+        * hypervisor purposes and don't yet track the page
+        * class separately. Until then, don't allow H_PROTECT
+        * to change key bits.
+        */
        bits = (flags << 55) & HPTE_R_PP0;
-       bits |= (flags << 48) & HPTE_R_KEY_HI;
-       bits |= flags & (HPTE_R_PP | HPTE_R_N | HPTE_R_KEY_LO);
+       bits |= flags & (HPTE_R_PP | HPTE_R_N);
 
        /* Update guest view of 2nd HPTE dword */
-       mask = HPTE_R_PP0 | HPTE_R_PP | HPTE_R_N |
-               HPTE_R_KEY_HI | HPTE_R_KEY_LO;
+       mask = HPTE_R_PP0 | HPTE_R_PP | HPTE_R_N;
        rev = real_vmalloc_addr(&kvm->arch.revmap[pte_index]);
        if (rev) {
                r = (rev->guest_rpte & ~mask) | bits;
-- 
1.9.1
