On Mon, 2014-04-28 at 15:05 -0700, Hugh Dickins wrote:
> On Mon, 28 Apr 2014, Linus Torvalds wrote:
> > On Mon, Apr 28, 2014 at 2:20 PM, Linus Torvalds
> > <[email protected]> wrote:
> > >
> > > That said, the bug does seem to be that some path doesn't invalidate
> > > the vmacache sufficiently, or something inserts a vmacache entry into
> > > the current process when looking up a remote process or whatever.
> > > Davidlohr, ideas?
> >
> > Maybe we missed some use_mm() call. That will change the current mm
> > without flushing the vma cache. The code considers kernel threads to
> > be bad targets for vma caching for this reason (and perhaps others),
> > but maybe we missed something.
> >
> > I wonder if we should just invalidate the vma cache in use_mm(), and
> > remote the "kernel tasks are special" check.
> >
> > Srivatsa, are you doing something peculiar on that system that would
> > trigger this? I see some kdump failures in the log, anything else?
>
> I doubt that the vmacache has anything to do with the real problem
> (though it *might* suggest that vmacache is less robust than what
> it replaced - maybe). The log is so full of userspace SIGSEGVs
> and General Protection faults, it looks like userspace was utterly
> broken by some kernel bug messing up the address space.
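
Regarding invalidating the cache in use_mm(): something like the below is
what I'd picture. This is only a sketch (not compile-tested), the use_mm()
body is paraphrased from memory, and the vmacache_flush(tsk) call right
after tsk->mm is switched is the whole point:

#include <linux/sched.h>
#include <linux/vmacache.h>
#include <linux/mmu_context.h>
#include <asm/mmu_context.h>

void use_mm(struct mm_struct *mm)
{
	struct task_struct *tsk = current;
	struct mm_struct *active_mm;

	task_lock(tsk);
	active_mm = tsk->active_mm;
	if (active_mm != mm) {
		atomic_inc(&mm->mm_count);
		tsk->active_mm = mm;
	}
	tsk->mm = mm;
	/*
	 * The task now runs against a different mm, so whatever the
	 * per-thread vmacache remembers belongs to the old one.
	 * Drop it here instead of special-casing kernel threads in
	 * the lookup path.
	 */
	vmacache_flush(tsk);
	switch_mm(active_mm, mm, tsk);
	task_unlock(tsk);

	if (active_mm != mm)
		mmdrop(active_mm);
}

If we did that, the "kernel tasks are special" check on the lookup side
could indeed go away, as you suggest.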
As for the segfaults: I think that returning some stale/bogus vma is what
is causing them in udev; that shouldn't happen in a normal lookup. What
puzzles me is that it's not always reproducible, which makes me wonder
what else is going on...
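
To make the stale/bogus-vma failure mode concrete, below is a toy
user-space model of a seqnum-validated per-thread cache. All of the names
are made up for illustration; this is not the kernel's vmacache code. The
point it shows: if a task is pointed at a different mm without flushing
its cached slots, and the two mms happen to agree on the sequence number,
the lookup hands back a vma belonging to the old address space.

#include <stdio.h>

#define CACHE_SLOTS 4

struct toy_mm  { unsigned long seqnum; };
struct toy_vma { unsigned long start, end; struct toy_mm *mm; };

struct toy_task {
	struct toy_mm  *mm;
	unsigned long   cache_seqnum;
	struct toy_vma *cache[CACHE_SLOTS];
};

/* Trust the cached slots only if task and mm agree on the seqnum. */
static struct toy_vma *toy_find(struct toy_task *tsk, unsigned long addr)
{
	if (tsk->cache_seqnum != tsk->mm->seqnum) {
		/* Stale generation: resync, drop the slots, force a miss. */
		tsk->cache_seqnum = tsk->mm->seqnum;
		for (int i = 0; i < CACHE_SLOTS; i++)
			tsk->cache[i] = NULL;
		return NULL;
	}
	for (int i = 0; i < CACHE_SLOTS; i++) {
		struct toy_vma *vma = tsk->cache[i];
		if (vma && vma->start <= addr && addr < vma->end)
			return vma; /* nothing proves vma->mm == tsk->mm */
	}
	return NULL;
}

int main(void)
{
	struct toy_mm mm_a = { .seqnum = 7 };
	struct toy_mm mm_b = { .seqnum = 7 };	/* same generation by accident */
	struct toy_vma vma_a = { 0x1000, 0x2000, &mm_a };
	struct toy_task tsk = { .mm = &mm_a, .cache_seqnum = 7,
				.cache = { &vma_a } };

	/* Point the task at a different mm without flushing its cache. */
	tsk.mm = &mm_b;

	struct toy_vma *hit = toy_find(&tsk, 0x1800);
	printf("cache hit %p, owned by the old mm: %s\n",
	       (void *)hit, hit && hit->mm == &mm_a ? "yes" : "no");
	return 0;
}

Built with gcc -std=c99, it reports a hit that is still owned by mm_a,
which is exactly the kind of bogus vma that would make a process fault
all over the place.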

