On 12 Apr 2011, at 13:51, Jorge Gonzalez wrote:

Rather, it needs to load all the stuff and _then_ fork, so that the stuff is identical and shared.


You are right in this case: the pages would be shared just after the fork, but would probably start to get copied individually for each process as soon as it starts doing something useful. For perl, which works as some kind of JIT compiler, the script's executable code is just data and probably gets rewritten very often, so each process would end up with its own set of pages.

Not quite.

What happens is that the perl code is all compiled at startup, and will stay static for the duration of the program.

Ergo, loading a lot of stuff and _then_ forking _will_ give you some memory sharing.

However, the difficulty is that you don't get any say in which parts of memory hold perl code (which doesn't change) and which hold variables (which do change). And because the operating system manages memory at a granularity of 4k pages, changing even one byte within a 4k page causes that whole page to become unshared.

Ergo, even though most of a memory page may contain static code, it only takes a few bytes of that page being allocated to variables for the whole page to become unshared...

So memory sharing starts out high if you pre-load everything, but falls off over time.

Cheers
t0m


_______________________________________________
List: [email protected]
Listinfo: http://lists.scsys.co.uk/cgi-bin/mailman/listinfo/catalyst
Searchable archive: http://www.mail-archive.com/[email protected]/
Dev site: http://dev.catalyst.perl.org/
