According to Matt Sergeant:
> Well, I'll show by example. Take slash (the perl scripts for slashdot.org) -
> it's got a web front end, and an NNTP front end is now available. Wouldn't
> it be nice to run both in-process under mod_perl, so you could easily
> communicate between the two, use the same logging code, use the same core
> modules, etc.? That's what I'm thinking of.
If the common code is written as perl modules, or as shared C
libraries wrapped as perl modules, you can easily use the same
routines in different programs. There is no need to load them
into programs that don't use them.
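For example (a rough sketch; Slash::Log and log_msg are made-up
names for illustration, not part of slash itself):

    # Slash/Log.pm - common logging code shared by every front end
    package Slash::Log;
    use strict;
    use Exporter 'import';
    our @EXPORT_OK = qw(log_msg);

    sub log_msg {
        my ($msg) = @_;
        print STDERR scalar(localtime) . " $msg\n";
    }

    1;

The mod_perl handler and the NNTP server alike just say:

    use Slash::Log qw(log_msg);
    log_msg('request handled');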
> Besides that, with a mod_perl enabled generic server rather than an inetd
> server there's no loading of config files for each request and no starting
> a process, and Apache 2.0 (and I'm assuming mod_perl) will be available as
> a threaded server, so it's only one 10-20MB process, not 100+.
Server start-up time is generally only relevant for protocols that
open a connection per request, and HTTP is about the only thing
that does that. Regardless, it is simple enough to make a dedicated
server listen on each port if you prefer.
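Something like this is all it takes for a long-lived dedicated
listener (a sketch only; the port number and greeting are
placeholders):

    #!/usr/bin/perl
    # A persistent server reads its config once at startup instead
    # of once per connection the way an inetd-spawned program does.
    use strict;
    use IO::Socket::INET;

    my $server = IO::Socket::INET->new(
        LocalPort => 1119,    # stand-in for NNTP's port 119
        Proto     => 'tcp',
        Listen    => 5,
        Reuse     => 1,
    ) or die "can't listen: $!";

    while (my $client = $server->accept) {
        print $client "200 server ready\r\n";
        # ... speak the real protocol here, then:
        close $client;
    }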
Threads may help with the memory problem, but I'm not convinced yet.
It has taken about 15 years to get the standard libraries mostly
thread-safe, and I don't think that will happen instantly with perl.
Maybe with java, where threads were designed in from the start...
Les Mikesell
[EMAIL PROTECTED]