Chris Albertson wrote:

Under Linux (and other OSes) it's not as bad as that.  Even with
128 Perl processes running there is only one copy of the Perl
interpreter's code in memory.  Each of the 128 running processes
has its own copy of only its data segments.  With Perl already
in memory, the biggest system overhead is process creation.
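
You can see this for yourself on Linux by reading
/proc/<pid>/smaps for one of the Perl processes.  A rough
sketch only (it assumes a kernel new enough to expose smaps;
check the field names on your box):

  #!/usr/bin/perl
  # Total the shared vs. private kilobytes mapped by one pid.
  use strict;
  use warnings;

  my $pid = shift or die "usage: $0 <pid>\n";
  my ($shared, $private) = (0, 0);
  open my $fh, '<', "/proc/$pid/smaps" or die "open smaps: $!";
  while (<$fh>) {
      $shared  += $1 if /^Shared_(?:Clean|Dirty):\s+(\d+) kB/;
      $private += $1 if /^Private_(?:Clean|Dirty):\s+(\d+) kB/;
  }
  close $fh;
  print "pid $pid: $shared kB shared, $private kB private\n";

Run it against two of the 128 Perl pids and you should see
most of the interpreter's pages counted as shared.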

The best design is the one that minimizes the number of
processes that the kernel has to create.  Notice that this is
why the Apache Perl module (mod_perl) is so much faster than
running Perl from a CGI script.
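
For the curious, a mod_perl handler stays resident inside
httpd, so nothing is forked per request.  A minimal mod_perl 2
response handler looks roughly like this (the package name
My::Hello is made up for the example):

  package My::Hello;
  use strict;
  use warnings;
  use Apache2::RequestRec ();
  use Apache2::RequestIO ();
  use Apache2::Const -compile => qw(OK);

  sub handler {
      my $r = shift;                  # the Apache request object
      $r->content_type('text/plain');
      $r->print("served in-process, no fork per request\n");
      return Apache2::Const::OK;
  }
  1;

Compare that with plain CGI, where Apache forks and execs a
fresh perl for every single hit.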

You will get the best use of shared pages if all child interpreter processes fork off of one parent process. That way they can also share as many data pages as possible.

If they don't fork off of a common parent, then a new copy of the interpreter will be loaded into memory. There will still be some shared pages between them (the interpreter's read-only text, mapped from the same binary), but not nearly as many as when the children fork off of a common parent, where copy-on-write lets them share data pages too.
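
Putting that together, the pre-fork pattern looks roughly like
this (a sketch only; the worker count and the load_config() and
handle_requests() routines are placeholders for whatever your
app actually does):

  #!/usr/bin/perl
  # Pre-forking sketch: load everything in the parent, then
  # fork, so the children share those pages copy-on-write.
  use strict;
  use warnings;

  my $workers = 8;                         # pick your pool size
  my %config  = load_config();             # loaded ONCE, shared CoW

  my @kids;
  for (1 .. $workers) {
      my $pid = fork();
      die "fork failed: $!" unless defined $pid;
      if ($pid == 0) {
          handle_requests(\%config);       # placeholder worker loop
          exit 0;
      }
      push @kids, $pid;
  }
  waitpid($_, 0) for @kids;                # reap the pool

  sub load_config     { return (greeting => 'hello') }
  sub handle_requests { my ($cfg) = @_; sleep 1 }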