On 08/07/2012 04:52 PM, Alexander Graf wrote:
>>>
>>> +/************* MMU Notifiers *************/
>>> +
>>> +int kvm_unmap_hva(struct kvm *kvm, unsigned long hva)
>>> +{
>>> +	/* Is this a guest page? */
>>> +	if (!hva_to_memslot(kvm, hva))
>>> +		return 0;
>>> +
>>> +	/*
>>> +	 * Flush all shadow tlb entries everywhere. This is slow, but
>>> +	 * we are 100% sure that we catch the to-be-unmapped page.
>>> +	 */
>>> +	kvm_flush_remote_tlbs(kvm);
>>
>> Wow.
>
> Yeah, cool, eh? It sounds worse than it is. Usually when we need to page out,
> we're under memory pressure, so we would get called multiple times to unmap
> different pages. If we just drop all shadow tlb entries, we also free a lot
> of memory that can then be paged out without further callbacks.

And it's just a shadow tlb, yes? So there's a limited amount of stuff
there. But it would be hell on x86.
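
(For comparison, a targeted version would only drop the shadow tlb entries
backed by the faulting hva, something like the sketch below. This is just to
show the shape -- for_each_shadow_tlbe_for_hva() and kvmppc_e500_ref_release()
are hypothetical helpers here, not anything in the patch, and the sketch
assumes the generic notifier glue flushes the guest tlbs when we return
nonzero, as on x86:

	/*
	 * Hypothetical targeted unmap: release only the shadow tlb entries
	 * that map this hva instead of flushing everything.
	 */
	int kvm_unmap_hva(struct kvm *kvm, unsigned long hva)
	{
		struct tlbe_ref *ref;

		if (!hva_to_memslot(kvm, hva))
			return 0;

		spin_lock(&kvm->mmu_lock);
		/* hypothetical reverse-map walk from hva to shadow entries */
		for_each_shadow_tlbe_for_hva(kvm, hva, ref)
			kvmppc_e500_ref_release(ref);
		spin_unlock(&kvm->mmu_lock);

		/* nonzero tells the caller the guest tlbs need flushing */
		return 1;
	}

But that needs a reverse map from hva to shadow entries, which is exactly the
bookkeeping the flush-everything approach gets to skip.)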
>
>>
>>> +
>>> +	return 0;
>>> +}
>>> +
>>
>> Where do you drop the reference count when installing a page in a shadow
>> tlb entry?
>
> Which reference count? Essentially the remote tlb flush calls
> kvmppc_e500_prov_release() on all currently mapped shadow tlb entries. Are we
> missing out on something more?
>
With mmu notifiers, mapped pages are kept without elevated reference
counts; the mmu notifier protects them, not the refcount. This allows
core mm code that looks at refcounts (reclaim, migration and so on) to
work on guest pages.
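
To make that concrete, the fault/install side under mmu notifiers follows
roughly the pattern below (paraphrased from memory of the x86 fault path;
exact helper names and signatures may differ in your tree, and the e500 map
path would do the same dance when installing a shadow tlb entry):

	/* Sketch: install a shadow translation without pinning the page. */
	static int sketch_map_shadow_tlbe(struct kvm_vcpu *vcpu, gfn_t gfn)
	{
		struct kvm *kvm = vcpu->kvm;
		unsigned long mmu_seq;
		pfn_t pfn;

		/*
		 * mmu_notifier_seq + mmu_lock guarantee we don't install a
		 * translation for a page that was invalidated under our feet.
		 */
		mmu_seq = kvm->mmu_notifier_seq;
		smp_rmb();

		/* look up the page, but keep no long-term reference */
		pfn = gfn_to_pfn(kvm, gfn);
		if (is_error_pfn(pfn))
			return -EFAULT;

		spin_lock(&kvm->mmu_lock);
		if (mmu_notifier_retry(vcpu, mmu_seq)) {
			/* raced with an invalidate; let the guest refault */
			spin_unlock(&kvm->mmu_lock);
			kvm_release_pfn_clean(pfn);
			return 0;
		}
		/* ... install the shadow tlb entry pointing at pfn ... */
		spin_unlock(&kvm->mmu_lock);

		/* drop the temporary reference; the notifier protects the page now */
		kvm_release_pfn_clean(pfn);
		return 0;
	}

After that the page is only protected by the notifier: when the core mm wants
the page back, it calls into kvm_unmap_hva() and the shadow entry -- or, with
this patch, the whole shadow tlb -- is gone before the page is actually freed.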
--
error compiling committee.c: too many arguments to function