On Thu, Nov 08, 2012 at 13:34, Ilya Bakulin wrote:
> The problem seems to be in uvm_map_pageable_all() function
> (sys/uvm/uvm_map.c). This function is a "special case of uvm_map_pageable",
> which tries to mlockall() all mapped memory regions.
> Prior to calling uvm_map_pageable_wire(), which actually does locking, it
> tries to count how many memory bytes will be locked, and compares this
> number with uvmexp.wiredmax, which is set by RLIMIT_MEMLOCK.
> The problem is that counting algorithm doesn't take into account that some
> pages have VM_PROT_NONE flag set and hence won't be locked anyway.
> Later in uvm_map_pageable_wire() these pages are skipped when doing actual
> job.
I don't know if this is right. Should PROT_NONE pages not be wired? I think the opposite should happen: PROT_NONE pages should be locked as well. The app may be using PROT_NONE as a way to protect its super secret secrets from itself. It certainly wouldn't want them being swapped out.
