Hi Andrea,
On Thu, Jul 03, 2008 at 05:17:42PM +0200, Andrea Arcangeli wrote:
> +static int kvm_unmap_rmapp(struct kvm *kvm, unsigned long *rmapp)
> +{
> +	u64 *spte;
> +	int need_tlb_flush = 0;
> +
> +	while ((spte = rmap_next(kvm, rmapp, NULL))) {
> +		BUG_ON(!(*spte & PT_PRESENT_MASK));
> +		rmap_printk("kvm_unmap_rmapp: spte %p %llx\n", spte, *spte);
> +		rmap_remove(kvm, spte);
There's a locking issue with rmap_remove here: besides the concurrent
memslot changes that your "read-only browsing" patch covers, there are
also concurrent alias changes to worry about. The mmu_shrink path is
similarly vulnerable at the moment (see
http://article.gmane.org/gmane.comp.emulators.kvm.devel/19102); we
should change both memslots and aliases to be protected by mmu_lock
(sketch below the quoted hunk).
> +		set_shadow_pte(spte, shadow_trap_nonpresent_pte);
> +		need_tlb_flush = 1;
> +	}
> +	return need_tlb_flush;
> +}
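Roughly what I have in mind for the alias side (untested sketch, not a
patch: sanity checks and error handling dropped, and it assumes the
current x86.c layout of kvm->arch.aliases/naliases):

/*
 * Sketch: publish the alias update under mmu_lock so the mmu
 * notifier handlers (and mmu_shrink) never see a half-updated
 * alias map while walking rmaps and calling rmap_remove().
 */
static int kvm_vm_ioctl_set_memory_alias(struct kvm *kvm,
					 struct kvm_memory_alias *alias)
{
	int n;
	struct kvm_mem_alias *p;

	down_write(&kvm->slots_lock);
	spin_lock(&kvm->mmu_lock);

	p = &kvm->arch.aliases[alias->slot];
	p->base_gfn = alias->guest_phys_addr >> PAGE_SHIFT;
	p->npages = alias->memory_size >> PAGE_SHIFT;
	p->target_gfn = alias->target_phys_addr >> PAGE_SHIFT;

	for (n = KVM_ALIAS_SLOTS; n > 0; --n)
		if (kvm->arch.aliases[n - 1].npages)
			break;
	kvm->arch.naliases = n;

	spin_unlock(&kvm->mmu_lock);
	/* drop any shadow ptes built on the old alias map */
	kvm_mmu_zap_all(kvm);

	up_write(&kvm->slots_lock);
	return 0;
}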
Still need to nuke large page mappings on invalidate_range, right?
Presently not an issue for invalidate_page.
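Something along these lines in the hva walker would cover it (sketch
only; I'm assuming your kvm_handle_hva helper plus the lpage_info/
rmap_pde layout in current mmu.c, so adjust names as needed):

static int kvm_handle_hva(struct kvm *kvm, unsigned long hva,
			  int (*handler)(struct kvm *kvm, unsigned long *rmapp))
{
	int i;
	int retval = 0;

	for (i = 0; i < KVM_MEMORY_SLOTS; ++i) {
		struct kvm_memory_slot *memslot = &kvm->memslots[i];
		unsigned long start = memslot->userspace_addr;
		unsigned long end = start + (memslot->npages << PAGE_SHIFT);

		if (hva >= start && hva < end) {
			gfn_t gfn_offset = (hva - start) >> PAGE_SHIFT;

			retval |= handler(kvm, &memslot->rmap[gfn_offset]);
			/* also run the handler over the rmap of the
			 * large page covering this gfn, so 2MB shadow
			 * mappings get nuked too */
			retval |= handler(kvm,
					  &memslot->lpage_info[gfn_offset /
						KVM_PAGES_PER_HPAGE].rmap_pde);
		}
	}
	return retval;
}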
Other than that looks much simpler, thanks.