>You could still have fast forking without overcommitting, you’d just pay
>the cost in unreachable RAM.
>
>If I have 4 GB of RAM in the system, and the kernel takes 1 GB of that, I
>start a 2.5 GB user space process, and my process forks itself with the
>intent of starting an 0.1 GB process, that fork would have to fail if
>overcommitting weren’t allowed.

No, it wouldn't, and there is no overcommitment.  You are creating a second 
process that uses the same V:R mapping as the original process, so it is 
consuming no more virtual memory after the fork operation than before (except 
for the bytes needed to track the new process).  You now have two execution 
paths through the same mapping, which may require a larger real-memory working 
set, but you have not increased the virtual memory size.  That is, until one of 
the processes modifies a memory page, in which case an additional virtual page 
must be allocated to hold the modified copy.
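A minimal sketch of that behaviour (POSIX/Linux assumed; the region size and
fill values are illustrative, not from the original example):

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    size_t len = 64 * 1024 * 1024;              /* 64 MB private region */
    char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }
    memset(buf, 'A', len);                      /* parent touches every page */

    pid_t pid = fork();                         /* child shares the mapping */
    if (pid == 0) {
        /* Reading costs nothing extra: the child sees the parent's pages. */
        printf("child reads: %c\n", buf[0]);
        /* Writing one byte forces a copy of that single page; only now is
         * an additional page allocated to hold the modified copy. */
        buf[0] = 'B';
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    printf("parent still sees: %c\n", buf[0]);  /* 'A': the copy was private */
    return 0;
}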

Overcommitment occurs at the R level, not at the second V in the V:V:R mapping.

This is why shared libraries (and discontiguous saved segments) were invented.  
They permit the per-process mapping (the first V in V:V:R) to use already 
existing virtual pages (the second V in V:V:R) without increasing the count of 
virtual pages.  It is not overcommitment unless the number of virtual pages 
(the second V in V:V:R) exceeds the number of pages in R plus backing store.
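Roughly what a dynamic loader does for library text, sketched with mmap() (the
file path is illustrative; on Linux the libc file lives under an
architecture-specific directory):

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    const char *path = "/usr/lib/libc.so.6";    /* any large, shared file */
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }
    struct stat st;
    if (fstat(fd, &st) != 0) { perror("fstat"); return 1; }

    /* Read-only shared mapping: the pages come straight from the file cache. */
    const char *text = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (text == MAP_FAILED) { perror("mmap"); return 1; }

    if (fork() == 0) {
        /* The child's per-process mapping (first V) reuses the same
         * already-existing pages (second V); nothing new is committed. */
        printf("child  sees byte 0x%02x\n", (unsigned char)text[0]);
        _exit(0);
    }
    wait(NULL);
    printf("parent sees byte 0x%02x\n", (unsigned char)text[0]);
    return 0;
}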

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume. 


