On Fri, Jun 06, 2008 at 07:37:48PM +0300, Avi Kivity wrote:
> For the gfn->hva (the common case) we can break immediately. For the
> hva->gfn, in general we cannot, but we can add an "unaliased" flag to the

I was thinking of hva->gfn, which is what the mmu notifier does. But
gfn_to_memslot is called much more frequently and is more performance
critical, and for it, sorting the list by size is enough.

> memslot and set it for slots which do not have aliases. That makes the
> loop terminate soon again.

So we need to make sure that aliases (gart-like) aren't common, or if
we have an alias on ram we go back to scanning the whole list all the
time.

> Any pointer-based data structure is bound to be much slower than a list
> with such a small number of elements.

A tree can only be slower than the list if there is the bitflag to
signal there is no alias, so that the list always breaks the loop at
the first step. If you remove that bitflag, tree lookup can't be slower
than walking the entire list, even if there are only 3/4 elements
queued. Only the no_alias bitflag allows the list to be faster.

> btw, on 64-bit userspace we can arrange the entire physical address space
> in a contiguous region (some of it unmapped) and have just one slot for
> the guest.

I thought mmio regions would need to be separated for 64bit too? I
mean, what's the point of the memslots in the first place if there's
only one for the whole physical address space?

> Okay. It's sad, but I don't see any choice.
>
> If anyone from Intel is listening, please give us an accessed bit (and
> a dirty bit, too).

Seconded.

The other thing we could do would be to mark the spte invalid and
return 1, and then at the second ->clear_test_young, if the spte is
still invalid, return 0. That way we would limit the accessed-bit
refresh to a kvm page fault without tearing down the linux pte (so
follow_page would be enough then). Whereas if we return 0 and the
linux pte is already old, the page will be unmapped and go into
swapcache, follow_page won't be enough, and get_user_pages will have
to go through a do_swap_page minor fault.
However, to do the above we would need to track non-present sptes with
rmap, and that would require changes to the kvm rmap logic, so
initially returning 0 is simpler.
