On Thu, Apr 03, 2008 at 12:20:41PM -0700, Christoph Lameter wrote:
> On Thu, 3 Apr 2008, Andrea Arcangeli wrote:
>
> > My attempt to fix this once and for all is to walk all vmas of the
> > "mm" inside mmu_notifier_register and take all anon_vma locks and
> > i_mmap_locks in virtual address order in a row. It's ok to take those
> > inside the mmap_sem. Supposedly if anybody will ever take a double
> > lock it'll do in order too. Then I can dump all the other locking and
>
> What about concurrent mmu_notifier registrations from two mm_structs
> that have shared mappings? Isn't there a potential deadlock situation?
No, the lock ordering avoids that. Here's a snippet:

/*
 * This operation locks against the VM for all pte/vma/mm related
 * operations that could ever happen on a certain mm. This includes
 * vmtruncate, try_to_unmap, and all page faults. The holder
 * must not hold any mm related lock. A single task can't take more
 * than one mm lock in a row or it would deadlock.
 */

So you can't do:

	mm_lock(mm1);
	mm_lock(mm2);

But if two different tasks run mm_lock, everything is ok: each task in
the system can lock at most one mm at a time.

> Well good luck. Hopefully we will get to something that works.

Looks good so far, but I haven't finished it yet.
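To make the ordering argument concrete, here is a minimal user-space
sketch of the idea. This is not the kernel code: fake_mm, the pthread
mutexes, and the iteration counts are stand-ins of my own invention for
the real anon_vma locks and i_mmap_locks taken under mmap_sem. The point
it demonstrates is just the deadlock-avoidance rule from the thread: each
task sorts the locks reachable from its own mm and takes them in that one
global order, so two concurrent registrations against mms with shared
mappings always acquire the shared locks in the same order and can't
ABBA-deadlock.

/* lock_order.c - sketch of mm_lock-style ordered locking.
 * Build: cc -pthread lock_order.c
 */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for an mm: just the set of locks reachable from its vmas
 * (anon_vma locks and i_mmap_locks in the real kernel). */
struct fake_mm {
	pthread_mutex_t **locks;
	int nr_locks;
};

/* Order locks by address: one global ordering shared by every task. */
static int cmp_lock(const void *a, const void *b)
{
	uintptr_t x = (uintptr_t)*(pthread_mutex_t *const *)a;
	uintptr_t y = (uintptr_t)*(pthread_mutex_t *const *)b;
	return (x > y) - (x < y);
}

/* Take every lock of one mm in sorted order.  Two tasks doing this
 * concurrently on two mms that share some locks acquire the shared
 * locks in the same order, so no ABBA deadlock is possible. */
static void mm_lock(struct fake_mm *mm)
{
	qsort(mm->locks, mm->nr_locks, sizeof(*mm->locks), cmp_lock);
	for (int i = 0; i < mm->nr_locks; i++) {
		/* Skip duplicates: the same lock may be reachable from
		 * more than one vma of the same mm, and taking a
		 * non-recursive lock twice would self-deadlock. */
		if (i && mm->locks[i] == mm->locks[i - 1])
			continue;
		pthread_mutex_lock(mm->locks[i]);
	}
}

/* Release in reverse order, skipping the same duplicates. */
static void mm_unlock(struct fake_mm *mm)
{
	for (int i = mm->nr_locks - 1; i >= 0; i--) {
		if (i && mm->locks[i] == mm->locks[i - 1])
			continue;
		pthread_mutex_unlock(mm->locks[i]);
	}
}

/* One "registration" per task: it locks its own mm, never a second
 * one, matching the "at most one mm lock per task" rule. */
static void *register_thread(void *arg)
{
	struct fake_mm *mm = arg;
	for (int i = 0; i < 100000; i++) {
		mm_lock(mm);
		/* ... registration work would go here ... */
		mm_unlock(mm);
	}
	return NULL;
}

int main(void)
{
	static pthread_mutex_t a = PTHREAD_MUTEX_INITIALIZER;
	static pthread_mutex_t b = PTHREAD_MUTEX_INITIALIZER;
	static pthread_mutex_t shared = PTHREAD_MUTEX_INITIALIZER;
	/* mm1 and mm2 share one lock, like two mms with a shared mapping. */
	pthread_mutex_t *locks1[] = { &a, &shared };
	pthread_mutex_t *locks2[] = { &shared, &b };
	struct fake_mm mm1 = { locks1, 2 }, mm2 = { locks2, 2 };
	pthread_t t1, t2;

	pthread_create(&t1, NULL, register_thread, &mm1);
	pthread_create(&t2, NULL, register_thread, &mm2);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	printf("no deadlock\n");
	return 0;
}

Without the sort, the two threads could take the shared lock and their
private lock in opposite orders and deadlock; with it, the program runs
to completion every time.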