Hi all,
reading
http://perl.apache.org/docs/1.0/api/Apache/SizeLimit.html#Shared_Memory_Opti
ons
i am seeing that link about memory sharing by copy-on-write points to
http://perl.apache.org/docs/1.0/guide/index.html
and
'META: change link when site is live' stands after it.
Site is alive, how
Hello! :)
After moving to RedHat 7.3 with kernel 2.4.18-3smp
the system can't use shared memory:
---
CPU0 states: 26.1% user, 13.0% system, 0.0% nice, 59.0% idle
CPU1 states: 24.0% user, 10.1% system, 0.0% nice, 64.0% idle
Mem: 1030724K av, 953088K used, 77636K free, 0K shrd, 27856K
by lexicals
- make sure that you have the most up-to-date (kernel) version of your
OS. Newer Linux kernels seem to be a lot savvier at handling shared memory
than older kernels.
Again, I wish you strength in fixing this problem...
Elizabeth Mattijsen
http://www.kwinternet.com/eric
(250) 655
Hi all,
On Sat, 16 Mar 2002, Bill Marrs wrote:
leads one to wonder if some of our assumptions or tools used to
monitor memory are inaccurate or we're misinterpreting them.
Well 'top' on Linux is rubbish for sure.
73,
Ged.
that was swapped out to be swapped back in. It will
not fix those processes that have been sired after the shared memory
loss, as of Linux 2.2.15 and Solaris 2.6. (I have not checked since
then for behavior in this regard, nor have I checked on other OSes.)
Ed
On Thu, 14 Mar 2002, Bill Marrs wrote:
It's copy
, it returns an error.
The reason turning off swap works is because it forces the memory from
the parent process that was swapped out to be swapped back in. It will
not fix those processes that have been sired after the shared memory
loss, as of Linux 2.2.15 and Solaris 2.6. (I have
It's copy-on-write. The swap is a write-to-disk.
There's no such thing as sharing memory between one process on disk(/swap)
and another in memory.
agreed. What's interesting is that if I turn swap off and back on again,
the sharing is restored! So, now I'm tempted to run a crontab every 30
Subject: Re: loss of shared memory in parent httpd
the kernel deals with
swapping/sharing, so I can only speculate. I could imagine that it's
possible for it to do this, if the pages are marked properly, they could be
restored. But, I'll admit, it seems unlikely.
...and, I had this thought before. Maybe this apparent loss of shared
memory
what? doesn't seem to me
Bill Marrs wrote:
You actually can do this. See the mergemem project:
http://www.complang.tuwien.ac.at/ulrich/mergemem/
I'm interested in this, but it involves a kernel hack and the latest
version is from 29-Jan-1999, so I got cold feet.
It was a student project. And unless someone
I just wanted to mention that the theory that my loss of shared memory in
the parent is related to swapping seems to be correct.
When the lack of sharing occurs, it is correlated with my httpd processes
showing a SWAP (from top/ps) of 7.5MB, which is roughly equal to the amount
of sharing
of shared memory in parent httpd
On Tue, 12 Mar 2002, Graham TerMarsch wrote:
[...]
We saw something similar here, running on Linux servers. Turned out to be
that if the server swapped hard enough to swap an HTTPd out, then you
basically lost all the shared memory that you had. I can't
Stas Bekman wrote:
Bill Marrs wrote:
One more piece of advice: I find it easier to tune memory control
with a single parameter. Setting up a maximum size and a minimum
shared size is not as effective as setting up a maximum *UNSHARED*
size. After all, it's the amount of real memory
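That single-parameter tuning maps onto Apache::SizeLimit's unshared-size cap. A minimal mod_perl 1.x configuration sketch (the 12MB threshold here is an invented example, not a recommendation):

```perl
# startup.pl -- cap each child by *unshared* memory, not total size.
use Apache::SizeLimit ();

# Values are in KB: kill the child once (size - shared) exceeds ~12MB.
$Apache::SizeLimit::MAX_UNSHARED_SIZE = 12_000;
```

The check runs per request once httpd.conf adds `PerlFixupHandler Apache::SizeLimit`.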
to smooth
operation. Until it happens again.
Using GTop() to get the shared memory of each child before and after
running my perl for each page load showed that it wasn't my code causing
the jump, but suddenly the child, after having a good amount of shared
memory in use, loses a 10MB chunk
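The before/after measurement described here can be done with GTop's per-process memory interface; a sketch, assuming libgtop and the GTop module are installed:

```perl
use GTop ();

my $gtop   = GTop->new;
my $before = $gtop->proc_mem($$)->share;   # shared bytes for this child

# ... run the request-handling code under suspicion ...

my $after = $gtop->proc_mem($$)->share;
warn sprintf "shared memory changed by %d KB\n", ( $after - $before ) / 1024;
```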
At 09:18 AM 3/12/02 -0500, Bill Marrs wrote:
If anyone has any ideas what might cause the httpd parent (and new
children) to lose a big chunk of shared memory between them, please let me
know.
I've seen this happen many times. One day it works fine, the next you're
in trouble. And in my
Oops. Premature sending...
I have two ideas that might help:
- reduce number of global variables used, less memory pollution by lexicals
- make sure that you have the most up-to-date (kernel) version of your
OS. Newer Linux kernels seem to be a lot savvier at handling shared memory
than older
On Tue, 12 Mar 2002 09:18:32 -0500
Bill Marrs [EMAIL PROTECTED] wrote:
But... recently, something happened, and things have changed. After some
random amount of time (1 to 40 minutes or so, under load), the parent httpd
suddenly loses about 7-10mb of share between it and any new child it
At 11:46 PM 3/12/02 +0800, Stas Bekman wrote:
I'm not sure whether my assessment of the problem is correct. I would
welcome any comments on this.
Nope Elizabeth, your explanation isn't quite correct. ;)
Too bad... ;-(
Shared memory is not about sharing the pre-allocated memory pool (heap
materials
which you may find helpful for understanding the shared memory concepts.
Ah... ok... can't wait for that either... ;-)
Don't you love mod_perl for what it makes you learn :)
Well, yes and no... ;-)
Elizabeth Mattijsen
Elizabeth Mattijsen wrote:
Since Perl is basically all data, you would need to find a way of
localizing all memory that is changing to as few memory chunks as
possible.
That certainly would help. However, I don't think you can do that in
any easy way. Perl doesn't try to keep compiled
here, running on Linux servers. Turned out to be
that if the server swapped hard enough to swap an HTTPd out, then you
basically lost all the shared memory that you had. I can't explain all of
the technical details and the kernel-ness of it all, but from watching our
own servers here
to an
array.
Over time, I always see the parent process lose some shared memory. My
advice is to base your tuning not on the way it looks right after you
start it, but on the way it looks after serving pages for a few hours.
Yes, you will underutilize the box just after a restart, but you
Thanks for all the great advice.
A number of you indicated that it's likely due to my apache processes being
partially swapped to disk. That seems likely to me. I haven't had a
chance to prove that point, but when it does it again and I'm around, I
plan to test it with free/top (top has a
No, I can't explain the nitty gritty either. :-)
Someone should write up a summary of this thread and ask in a
technical linux place, or maybe ask Dean Gaudet.
I believe this is a linux/perl issue... stand alone daemons exhibit the
same behaviour... e.g. if you've got a parent PERL daemon
One of the shiny golden nuggets I received from said slice was a
shared memory cache. It was simple, it was elegant, it was
perfect. It was also based on IPC::Shareable. GREAT idea. BAD
juju.
Just use Cache::Cache. It's faster and easier.
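A minimal Cache::Cache replacement for an IPC::Shareable-based cache might look like this (the namespace, key, and expiry values are invented for illustration):

```perl
use Cache::FileCache ();

my $cache = Cache::FileCache->new({
    namespace          => 'accounts',
    default_expires_in => 600,          # seconds; stale entries are purged
});

my $session_id   = 'abc123';                     # invented example key
my $account_data = { name => 'rob', hits => 1 };

$cache->set( $session_id, $account_data );
my $account = $cache->get($session_id);   # undef if missing or expired
```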
Now, ya see...
Once upon a time, not many
The _session_id is used as the seed for the locking semaphore.
*IF* I understood the requirements correctly, the _session_id has
to be the same FOR EVERY PROCESS in order for the locking to work
as desired, for a given shared data structure.
Only if you want to lock the whole thing,
about
whether shared memory was as sensitive to locks/corruption as threading, and
B) I reviewed Apache::Session's lock code, but didn't review Cache::Cache's
(20/20 hindsight, ya know).
You're more than welcome to roll your own solution based on your
personal preferences, but I don't want people
On Tue, Sep 04, 2001 at 12:14:52PM -0700, Rob Bloodgood wrote:
***OH WOW!*** So, DURING the course of composing this message, I've
realized that the function expire_old_accounts() is now redundant!
Cache::Cache takes care of that, both with expires_in and max_size. I'm
leaving it in for
What about my IPC::FsSharevars? I've once mentioned it on this list,
but I don't have the time to read all list mail, so maybe I've missed
some conclusions following the discussion from last time.
I remember the post and went to find IPC::FsSharevars a while ago and was
un-intrigued when I
At 20:37 Uhr -0400 4.9.2001, Geoffrey Young wrote:
I remember the post and went to find IPC::FsSharevars a while ago and was
un-intrigued when I didn't find it on CPAN. has there been any feedback
from the normal perl module forums?
I haven't announced it on other forums (yet). (I think it's
Christian == Christian Jaeger [EMAIL PROTECTED] writes:
Christian I haven't announced it on other forums (yet). (I think it's
Christian more of a working version yet that needs feedback and some
Christian work to make it generally useable (i.e. under
Christian mod_perl). Which forum should I
Perrin == Perrin Harkins [EMAIL PROTECTED] writes:
Uhh... good point, except that I don't trust the Cache code. The
AUTHOR isn't ready to put his stamp of approval on the
locking/updating.
Perrin That sort of hesitancy is typical of CPAN. I wouldn't worry
Perrin about it. I think I
I don't think Cache::Cache has enough logic for an atomic
read-modify-write in any of its modes to implement (for example) a
web hit counter. It has only atomic write. The last write wins
strategy is fine for caching, but not for transacting, so I can see
why Rob is a bit puzzled.
In his
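The atomic read-modify-write that Cache::Cache lacks can be had with a plain file and flock; a hit-counter sketch using only core Perl (the file name is the caller's choice):

```perl
use Fcntl qw(:flock O_RDWR O_CREAT);

# Increment a counter atomically across processes: the exclusive
# flock serializes the whole read-modify-write cycle.
sub bump_counter {
    my ($file) = @_;
    sysopen my $fh, $file, O_RDWR | O_CREAT or die "open $file: $!";
    flock $fh, LOCK_EX or die "flock $file: $!";
    my $n = <$fh>;
    $n = 0 unless defined $n;
    $n++;
    seek $fh, 0, 0;
    truncate $fh, 0;
    print {$fh} $n;
    close $fh or die "close $file: $!";   # also releases the lock
    return $n;
}
```

Last-write-wins caches skip the lock-and-reread step, which is exactly why they can't implement a counter.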
Quoting Joshua Chamas [EMAIL PROTECTED]:
Also, more a side note, I have found that you have to fully
restart apache, not just a graceful, if either the Oracle server
is restarted or the TNS listener is restarted.
We fixed this at eToys by having children that failed to connect to the
into shared memory? If so, how?
DBI->install_driver()
See http://perl.apache.org/guide/performance.html#Initializing_DBI_pm for more.
- Perrin
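The linked advice amounts to loading the driver's code in the parent before the fork, so the pages end up shared copy-on-write with every child; a startup.pl sketch:

```perl
# startup.pl -- pull DBD::Oracle's code into the parent before forking.
use DBI ();
DBI->install_driver("Oracle");
```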
Hi,
I'm using Stas Bekman's excellent Apache::VMonitor module to help me
decrease my mod_perl child process memory usage. I was working on
preloading all of my perl modules and scripts in a startup.pl script when
I noticed that the amount of shared memory seemed very low. Immediately
after I
[EMAIL PROTECTED] writes:
Make sure to use DBD::Oracle in your startup.pl or
do PerlModule DBD::Oracle ... that should load up some
Oracle libs in the parent. Also, you *might* try
doing a connect or even an invalid connect to Oracle,
which might grab some extra libs that it only loads
at
Bob Foster wrote:
Thank you very much, Joshua. I have made some progress and am now seeing
15.8M shared out of 16.7M on the parent. I believe that the problem was
that I was doing a graceful restart which wasn't restarting the parent
process.
Now I have a different problem. When I
I understand the forking model of Apache, and what that means in terms of
data initialized in the start-up phase being ready-to-go in each child
process. But what I need to do is manage it so that a particular value is
shared between all children, such that changes made by one are recognized
by
such as a
database, file, or perhaps (assuming you're on a Unix system)
something like System V shared memory or semaphores.
One quick 'n cheap way to implement mutual exclusion between Unix
processes (executing on the same processor) is to use mkdir, which is
atomic (ie once a process requests a mkdir
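A sketch of that mkdir trick (the directory name and retry interval are invented): because mkdir either creates the directory or fails atomically, exactly one process can hold the "lock" at a time.

```perl
# Crude inter-process mutex built on mkdir's atomicity.
sub grab_lock {
    my ($dir) = @_;
    until ( mkdir $dir, 0700 ) {            # fails while another process holds it
        die "mkdir $dir: $!" unless $!{EEXIST};
        select undef, undef, undef, 0.1;    # back off briefly, then retry
    }
}

sub drop_lock {
    my ($dir) = @_;
    rmdir $dir or die "rmdir $dir: $!";
}
```

A crashed holder leaves the directory behind, so real uses usually add a staleness check.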
You can find more information on maintaining server-side state in the
mod_perl guide or from the mod_perl book (at perl.apache.org and
www.modperl.com, respectively).
TCP services worked quite well, no
problem.
Because this server is in production, after an unsuccessful try to
restart Informix (could not initialize shared memory), I just restarted
the whole server. Then I thought let's try some other connection type
for Informix and installed one more alias
Subject: Re: mod_perl shared memory with MM
Sean,
Yeah, I was thinking about something like that at first, but I've never played
with named
At 22:23 Uhr -0500 10.3.2001, DeWitt Clinton wrote:
On Sat, Mar 10, 2001 at 04:35:02PM -0800, Perrin Harkins wrote:
Christian Jaeger wrote:
Yes, it uses a separate file for each variable. This way also locking
is solved, each variable has its own file lock.
You should take a look at
On Sun, Mar 11, 2001 at 03:33:12PM +0100, Christian Jaeger wrote:
I've looked at Cache::FileCache now and think it's (currently) not
possible to use for IPC::FsSharevars:
I really miss locking capabilities. Imagine a script that reads a
value at the beginning of a request and writes it
I'm very intrigued by your thinking on locking. I had never
considered the transaction based approach to caching you are referring
to. I'll take this up privately with you, because we've strayed far
off the mod_perl topic, although I find it fascinating.
One more suggestion before you take
by a
BTree in shared memory. IPC::ShareLite only works for individual scalars.
It wouldn't surprise me if a file system approach was faster than either of
these on Linux, because of the aggressive caching.
- Perrin
an actual hash interface backed by a
BTree in shared memory. IPC::ShareLite only works for individual scalars.
Not tried that one !
I've used the obvious ShareLite plus Storable to serialise hashes.
It wouldn't surprise me if a file system approach was faster than either of
these on Linux
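The ShareLite-plus-Storable combination mentioned here looks roughly like this (the SysV IPC key and the sample data are invented; locking beyond ShareLite's own is not shown):

```perl
use IPC::ShareLite ();
use Storable qw(freeze thaw);

my $share = IPC::ShareLite->new(
    -key     => 1971,        # invented SysV IPC key
    -create  => 'yes',
    -destroy => 'no',
) or die $!;

# ShareLite stores flat strings, so serialize the hash through Storable.
$share->store( freeze( { user => 'adi', cart => [ 1, 2, 3 ] } ) );
my $data = thaw( $share->fetch );
```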
sophisticated locking
(even different variables from the same session can be written at the
same time).
Sounds very interesting. Does it use a multi-file approach like
File::Cache? Have you actually benchmarked it against BerkeleyDB? It's
hard to beat BDB because it uses a shared memory
hard to beat BDB because it uses a shared memory buffer, but theoretically
the file system buffer could do it since that's managed by the kernel.
Yes, it uses a separate file for each variable. This way also locking
is solved, each variable has its own file lock.
It's a bit difficult to write
when doing a sync after every
write as is recommended in various documentation to make it
multiprocess safe. What do you mean with BerkeleyDB, something
different than DB_File?
BerkeleyDB.pm is an interface to later versions of the Berkeley DB
library. It has a shared memory cache, and does
On Sat, Mar 10, 2001 at 04:35:02PM -0800, Perrin Harkins wrote:
Christian Jaeger wrote:
Yes, it uses a separate file for each variable. This way also locking
is solved, each variable has its own file lock.
You should take a look at DeWitt Clinton's Cache::FileCache module,
announced on
I have some preliminary benchmark code -- only good for relative
benchmarking, but it is a start. I'd be happy to post the results
here if people are interested.
Please do.
- Perrin
For all of you trying to share session information efficiently my
IPC::FsSharevars module might be the right thing. I wrote it after
having considered all the other solutions. It uses the file system
directly (no BDB/etc. overhead) and provides sophisticated locking
(even different variables
Adi Fairbank wrote:
IPC::ChildSafe is a good module, I use it
Sean Chittenden wrote:
Is there a way you can do that without using Storable?
Right after I sent the message, I was thinking to myself that same
question... If I extended IPC::MM, how could I get it to be any
faster than Storable already is?
You can also read in the data
Adi Fairbank wrote:
I am trying to squeeze more performance out of my persistent session cache. In
my application, the Storable image size of my sessions can grow upwards of
100-200K. It can take on the order of 200ms for Storable to deserialize and
serialize this on my (lousy) hardware.
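For reference, the round-trip being timed is just freeze/thaw on the session structure; a toy sketch with invented data:

```perl
use Storable qw(freeze thaw);

my %session = ( user => 'adi', cart => [ 1, 2, 3 ] );

my $image = freeze( \%session );   # serialize to a byte string
my $copy  = thaw($image);          # rebuild an equivalent structure

# Timing this pair on a real 100-200K session image is what
# produces the ~200ms figure quoted above.
```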
You can also read in the data you want in a startup.pl file
It's ok, I do that a lot, too. Usually right after I click "Send" is when I
realize I forgot something or didn't think it through all the way. :)
Sean
I'm looking at RSE's MM
Sam Horrocks wrote:
say they take two slices, and interpreters 1 and 2 get pre-empted and
go back into the queue. So then requests 5/6 in the queue have to use
other interpreters, and you expand the number of interpreters in use.
But still, you'll wind up using the smallest number of
There's only one run queue in the kernel. The first task ready to run is put
at the head of that queue, and anything arriving afterwards waits. Only
if that first task blocks on a resource or takes a very long time, or
a higher priority process becomes able to run due to an
uses to serialize requests:
fcntl(), flock(), Sys V semaphores, uslock (IRIX only) and Pthreads
(reliably only on Solaris). Do they _all_ result in LRU?
Remember that the httpd's in the speedycgi case will have very little
un-shared memory, because they don't have perl interpreters in them
There seems to be a lot of talk here, and analogies, and zero real-world
benchmarking.
Now it seems to me from reading this thread, that speedycgi would be
better where you run 1 script, or only a few scripts, and mod_perl might
win where you have a large application with hundreds of different
You know, I had a brief look through some of the SpeedyCGI code yesterday,
and I think the MRU process selection might be a bit of a red herring.
I think the real reason Speedy won the memory test is the way it spawns
processes.
Please take a look at that code again. There's no smoke
ter than mod_perl with scripts that contain un-shared memory
dycgi showed similar rates
with ab.
Even at higher levels (300), they were comparable.
That's what I would expect if both systems have a similar limit of how
many interpreters they can fit in RAM at once. Shared memory would help
here, since it would allow more
At concurrency level 100, both mod_perl and mod_speedycgi showed
similar rates with ab.
Even at higher levels (300), they were comparable.
That's what I would expect if both systems have a similar limit of how
many interpreters they can fit in RAM at
I have a wide assortment of queries on a site, some of which take several minutes to
execute, while others execute in less than one second. If I understand this analogy
correctly, I'd be better off with the current incarnation of mod_perl because there
would be more cashiers around to serve the
?
It is actually possible to benchmark. Given the same concurrent load
and the same number of httpds running, speedycgi will use fewer perl
interpreters than mod_perl. This will usually result in speedycgi
using less RAM, except under light loads, or if the amount of shared
memory is extremely large
There is no coffee. Only meals. No substitutions. :-)
If we added coffee to the menu it would still have to be prepared by the cook.
Remember that you only have one CPU, and all the perl interpreters large and
small must gain access to that CPU in order to run.
Sam
I have a wide
On Wed, 17 Jan 2001, Sam Horrocks wrote:
If in both the MRU/LRU case there were exactly 10 interpreters busy at
all times, then you're right it wouldn't matter. But don't confuse
the issues - 10 concurrent requests do *not* necessarily require 10
concurrent interpreters. The MRU has an
Hello Sam and others
If I haven't overlooked something, nobody so far has really mentioned FastCGI. I'm
asking myself why you reinvented the wheel. I summarize the
differences I see:
+ perl scripts are more similar to standard CGI ones than with
FastCGI (downside: see next point)
- it seems you can't
they were comparable.
That's what I would expect if both systems have a similar limit of how
many interpreters they can fit in RAM at once. Shared memory would help
here, since it would allow more interpreters to run.
By the way, do you limit the number of SpeedyCGI processes as well? i
Les Mikesell wrote:
[cut]
I don't think I understand what you mean by LRU. When I view the
Apache server-status with ExtendedStatus On, it appears that
the backend server processes recycle themselves as soon as they
are free instead of cycling sequentially through all the available
Sam Horrocks wrote:
A few things:
- In your results, could you add the speedycgi version number (2.02),
and the fact that this is using the mod_speedycgi frontend.
The version numbers are gathered at runtime, so for mod_speedycgi,
this would get picked up if you registered it in
is automatically
finding the sweet spot in terms of how many processes can run within the
space of one request and coming close to the ideal of never having
unused processes in memory. Now I'm really looking forward to getting
MRU and shared memory in the same package and seeing how high I can
scale my hardware
- Perrin
Buddy Lee Haystack wrote:
Does this mean that mod_perl's memory hunger will be curbed in the future using some of
the neat tricks in Speedycgi?
Yes. The upcoming mod_perl 2 (running on Apache 2) will use MRU to
select threads. Doug demoed this at ApacheCon a few months back.
- Perrin
- Original Message -
From: "Sam Horrocks" [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: "mod_perl list" [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Saturday, January 06, 2001 6:32 AM
Subject: Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts
th
Sam Horrocks wrote:
Don't agree. You're equating the model with the implemntation.
Unix processes model concurrency, but when it comes down to it, if you
don't have more CPU's than processes, you can only simulate concurrency.
Hey Sam, nice module. I just installed your SpeedyCGI for
Right, but this also points out how difficult it is to get mod_perl
tuning just right. My opinion is that the MRU design adapts more
dynamically to the load.
How would this compare to apache's process management when
using the front/back end approach?
Same thing applies.