> I've also found it useful to increase the value of MEMORY_CREATION_SIZE
> in the ElectricFence source. Setting this to larger than the amount
> of address space ever used by the program seems to avoid the
> vm.max_proc_mmap limit; maybe when ElectricFence calls mprotect()
> to divide up its allocated address space, each part of the split
> region is counted as a separate mmap.
Basically, yes: initially there is one vm_map entry per mmap. Each vm_map
entry represents a virtually contiguous piece of memory in which every
page is treated the same way. Hence, if I have a vm_map entry that
references pages A, B, and C, and I mprotect B, the VM system splits that
entry into three vm_map entries. So another, more likely, explanation is that
GTK2.0 is doing more malloc() and free() calls than GTK1.2.
The check happens right at the end of mmap:

	/*
	 * Do not allow more than a certain number of vm_map_entry structures
	 * per process.  Scale with the number of rforks sharing the map
	 * to make the limit reasonable for threads.
	 */
	if (max_proc_mmap &&
	    vms->vm_map.nentries >= max_proc_mmap * vms->vm_refcnt) {
		error = ENOMEM;
		goto done;
	}

	error = vm_mmap(&vms->vm_map, &addr, size, prot, maxprot,
	    flags, handle, pos);
>
> I came across this before while debugging perl-Tk, and one other
> issue was that the program ran fantastically slowly; a trivial
> script that normally starts in a fraction of a second was taking
> close to an hour to get there on quite fast hardware. You expect
> ElectricFence to make things slow, but not quite that slow :-)
If you have a heavily fragmented address space you could, in a pathological
case, end up with almost one vm_map entry per page. Considering that the common
case is 3 or 4 vm_map entries per process, yeah, it is going to be
mind-numbingly slow :-(. It would be interesting if you could dump the
statistics on process vmspaces.
-Kip
To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-hackers" in the body of the message