On 12.08.2012, at 11:24, Avi Kivity wrote:

> On 08/09/2012 08:02 PM, Alexander Graf wrote:
>> 
>> 
>> On 09.08.2012, at 12:36, Avi Kivity <[email protected]> wrote:
>> 
>>> On 08/09/2012 01:34 PM, Takuya Yoshikawa wrote:
>>>> On Tue,  7 Aug 2012 12:57:13 +0200
>>>> Alexander Graf <[email protected]> wrote:
>>>> 
>>>>> +struct kvm_memory_slot *hva_to_memslot(struct kvm *kvm, hva_t hva)
>>>>> +{
>>>>> +    struct kvm_memslots *slots = kvm_memslots(kvm);
>>>>> +    struct kvm_memory_slot *memslot;
>>>>> +
>>>>> +    kvm_for_each_memslot(memslot, slots)
>>>>> +        if (hva >= memslot->userspace_addr &&
>>>>> +              hva < memslot->userspace_addr +
>>>>> +                    (memslot->npages << PAGE_SHIFT))
>>>>> +            return memslot;
>>>>> +
>>>>> +    return NULL;
>>>>> +}
>>>> 
>>>> Can't we have two memory slots which contain that hva?
>>>> I thought that's why hva handler had to check all slots.
>>> 
>>> We can and do.  Good catch.
>>> 
>> 
>> Hrm. So I guess we can only do an hva_is_guest_memory() helper? That's all I 
>> really need anyways :)
>> 
> 
> How about kvm_for_each_memslot_hva_range()?  That can be useful in
> kvm_handle_hva_range().  For your use case, you just do your stuff and
> return immediately.
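For the archives, a standalone sketch of what such a walk could look like. The struct and function names below are simplified stand-ins, not the real KVM API; PAGE_SHIFT is hard-coded and the slot array is flat. The point it illustrates is that an hva range can intersect several memslots, so the walk must visit every overlapping slot rather than stop at the first:

```c
/* Sketch of a "for each memslot overlapping an hva range" walk, in the
 * spirit of the suggested kvm_for_each_memslot_hva_range().  Types and
 * names are simplified stand-ins for the real KVM ones.
 */
#include <assert.h>
#include <stddef.h>

#define PAGE_SHIFT 12

struct memslot {
	unsigned long userspace_addr;	/* start hva, in bytes  */
	unsigned long npages;		/* slot length, in pages */
};

/* Visit every slot whose [start, end) byte range overlaps [start, end);
 * call fn on each hit and return how many slots were visited.  A caller
 * that only needs a yes/no answer can stop at the first hit instead.
 */
static size_t for_each_memslot_hva_range(struct memslot *slots, size_t n,
					 unsigned long start,
					 unsigned long end,
					 void (*fn)(struct memslot *))
{
	size_t i, hits = 0;

	for (i = 0; i < n; i++) {
		unsigned long s = slots[i].userspace_addr;
		unsigned long e = s + (slots[i].npages << PAGE_SHIFT);

		/* half-open interval overlap: [start, end) vs [s, e) */
		if (start < e && s < end) {
			if (fn)
				fn(&slots[i]);
			hits++;
		}
	}
	return hits;
}
```

Note the scaling of npages to bytes before comparing, and that two slots backed by the same userspace range both get visited.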

Well, for now I just dropped the whole thing. In general, chances are pretty 
good that an HVA we get notified about via mmu notifiers represents guest 
memory, and flushing a few times too often shouldn't hurt.
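For completeness, the hva_is_guest_memory() helper floated above could look roughly like the sketch below. The structs are simplified stand-ins for the real KVM types (the name comes from the thread, not from any merged code), but the shape of the check is the interesting bit: since an hva may be covered by more than one slot, a boolean "does any slot cover this address?" query is well-defined where "which slot?" is not:

```c
/* Minimal standalone sketch of an hva_is_guest_memory() helper.
 * The kvm/memslot structs here are simplified stand-ins for the real
 * KVM types; the helper name is the hypothetical one from the thread.
 */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define PAGE_SHIFT 12

struct memslot {
	unsigned long userspace_addr;	/* start hva, in bytes  */
	unsigned long npages;		/* slot length, in pages */
};

struct slots {
	struct memslot *slots;
	size_t nslots;
};

/* True if any memslot covers hva.  It is fine for several slots to
 * match; we only need to know whether at least one does.
 */
static bool hva_is_guest_memory(struct slots *s, unsigned long hva)
{
	size_t i;

	for (i = 0; i < s->nslots; i++) {
		struct memslot *m = &s->slots[i];

		/* npages is in pages, so scale to bytes before comparing */
		if (hva >= m->userspace_addr &&
		    hva < m->userspace_addr + (m->npages << PAGE_SHIFT))
			return true;
	}
	return false;
}
```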


Alex

