On 07/29/2010 04:04 PM, Alexander Graf wrote:
On Book3s_32 the tlbie instruction flushes effective addresses by the mask
0x0ffff000. This is pretty hard to reflect with a hash that hashes ~0xfff, so
to speed up that target we should also keep a special hash around for it.


  static inline u64 kvmppc_mmu_hash_vpte(u64 vpage)
  {
        return hash_64(vpage & 0xfffffffffULL, HPTEG_HASH_BITS_VPTE);
@@ -66,6 +72,11 @@ void kvmppc_mmu_hpte_cache_map(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
        index = kvmppc_mmu_hash_pte(pte->pte.eaddr);
        hlist_add_head_rcu(&pte->list_pte, &vcpu->arch.hpte_hash_pte[index]);

+       /* Add to ePTE_long list */
+       index = kvmppc_mmu_hash_pte_long(pte->pte.eaddr);
+       hlist_add_head_rcu(&pte->list_pte_long,
+                       &vcpu->arch.hpte_hash_pte_long[index]);
+
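For reference, a standalone sketch of what a long-mask hash along these lines might look like. The helper name mirrors the patch, but the bucket width and the multiplicative constant standing in for the kernel's hash_64() are assumptions for illustration:

```c
#include <assert.h>
#include <stdint.h>

#define HPTEG_HASH_BITS_PTE_LONG 12  /* assumed: 4096 buckets */

/* Simplified stand-in for the kernel's hash_64() multiplicative hash. */
static inline uint64_t hash_64(uint64_t val, unsigned int bits)
{
        return (val * 0x61C8864680B583EBULL) >> (64 - bits);
}

/* Hash only the EA bits that tlbie on Book3s_32 compares (0x0ffff000),
 * so that all shadow PTEs the instruction would flush land in one bucket. */
static inline uint64_t kvmppc_mmu_hash_pte_long(uint64_t eaddr)
{
        return hash_64((eaddr & 0x0ffff000ULL) >> 12,
                       HPTEG_HASH_BITS_PTE_LONG);
}
```

Two effective addresses that differ only in bits outside the mask hash to the same bucket, which is the whole point of the extra list.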

Isn't it better to make operations on this list conditional on Book3s_32? Hashes are expensive since they usually cost cache misses.

Can of course be done later as an optimization.

--
error compiling committee.c: too many arguments to function

--
To unsubscribe from this list: send the line "unsubscribe kvm-ppc" in
the body of a message to [email protected]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
