On Tue, Jul 17, 2007 at 02:55:05PM -0700, Luck, Tony wrote:
> -                     tlb_finish_mmu(*tlbp, tlb_start, start);
> -
>                       if (need_resched() ||
>                               (i_mmap_lock && need_lockbreak(i_mmap_lock))) {
> -                             if (i_mmap_lock) {
> -                                     *tlbp = NULL;
> +                             if (i_mmap_lock)
>                                       goto out;
> 
> If we take this "goto out" path, then we'll miss out on calling
> the tlb_finish_mmu() which you deleted just above.

Look at the next hunk in the patch.  The old path set *tlbp to NULL
when we exited this function having already called tlb_finish_mmu();
in that case, zap_page_range() avoided calling tlb_finish_mmu() again.
Otherwise, *tlbp was left pointing at the mmu_gather structure, and it
was left to zap_page_range() to call tlb_finish_mmu().
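
To make that concrete, the old zap_page_range() did roughly this (a
simplified sketch, not the verbatim 2.6 source):

	tlb = tlb_gather_mmu(mm, 0);
	end = unmap_vmas(&tlb, vma, address, end, &nr_accounted, details);
	if (tlb)		/* NULL => unmap_vmas() already finished it */
		tlb_finish_mmu(tlb, address, end);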

The new path actually cleans this up - we always exit unmap_vmas()
_with_ the tlb context requiring tlb_finish_mmu(), so the call in
zap_page_range() becomes unconditional.
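
With the patch, the caller's NULL check goes away; again roughly (a
sketch, not the exact post-patch code):

	tlb = tlb_gather_mmu(mm, 0);
	end = unmap_vmas(&tlb, vma, address, end, &nr_accounted, details);
	tlb_finish_mmu(tlb, address, end);	/* now unconditional */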

So, if anything, this is a much-needed cleanup of the behaviour of
unmap_vmas().

> At the very
> least this will leave preemption disabled (since we'll miss calling
> the put_cpu_var(mmu_gathers)).
> 
> I think I'm also missing the big picture view of what you are
> doing here.

Avoiding calling tlb_finish_mmu() and tlb_gather_mmu() unnecessarily,
and (eg) thereby avoiding some repetitive entire TLB invalidations on
ARM.
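
On ARM the teardown looks roughly like this (a simplified sketch of
the 2.6-era include/asm-arm/tlb.h), which is where both the full TLB
invalidation and the put_cpu_var() you mention live:

	static inline void
	tlb_finish_mmu(struct mmu_gather *tlb,
		       unsigned long start, unsigned long end)
	{
		if (tlb->fullmm)
			flush_tlb_mm(tlb->mm);	/* whole-TLB invalidation */

		/* re-enables preemption taken by tlb_gather_mmu() */
		put_cpu_var(mmu_gathers);
	}

So each redundant gather/finish round trip in a single unmap costs
another full invalidation.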

-- 
Russell King
 Linux kernel    2.6 ARM Linux   - http://www.arm.linux.org.uk/
 maintainer of: