Thanks for the links and clarifications; here is another question.
For really understanding what this is about, the link you posted is
very good: http://www.nightmare.com/medusa/medusa.html
From the above Medusa link: "Most Internet servers are built on a
'forking' model." It never states that this is in fact the case with
Apache, but I get the feeling that this is how Apache works, at
least in the 1.x versions, which didn't do threading.
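To make the contrast concrete, here is a minimal sketch of the asynchronous model the Medusa page advocates: one process, no forking, a single event loop multiplexing all connections. This uses Python's standard selectors module rather than Medusa's actual API, and the function names (accept, echo, run_echo_server) are my own for illustration.

```python
import selectors
import socket

# One selector watches every socket; a single process serves all clients.
sel = selectors.DefaultSelector()

def accept(server_sock):
    """Callback for the listening socket: register each new client."""
    conn, _addr = server_sock.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, echo)

def echo(conn):
    """Callback for a client socket: echo data back, or clean up on close."""
    data = conn.recv(4096)
    if data:
        conn.sendall(data)       # small payloads; a real server would buffer
    else:                        # peer closed the connection
        sel.unregister(conn)
        conn.close()

def pump(iterations=5):
    """Run a few turns of the event loop (a real server would loop forever)."""
    for _ in range(iterations):
        for key, _mask in sel.select(timeout=0.1):
            key.data(key.fileobj)  # key.data holds the registered callback
```

No process is ever forked: each ready socket is handled by a callback in the same address space, which is exactly why the memory cost per connection is so much lower than one-process-per-client.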
If this is the case, then why is Apache the de facto standard? The
Medusa page gives the impression that forking is horribly
inefficient compared to the asynchronous approach, but if that were
really the case, an alternative to Apache based on the asynchronous
model would surely have strong support. There is a piece missing
here and I can't find it. Please enlighten me.
Could this maybe be the explanation: "A high-load server thus needs to
have a lot of memory. Many popular Internet servers are running with
hundreds of megabytes of memory." Hundreds of megabytes may have been
a lot at the time of writing, but it is nothing now. Maybe technological
advancement simply solved the whole problem on the HTTP side?
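A back-of-envelope calculation shows where those "hundreds of megabytes" come from under the forking model. The per-child figure and connection count below are assumptions I picked for illustration, not measurements of Apache:

```python
# Rough memory cost of one-process-per-connection (forking model).
# Both numbers are assumed, illustrative values, not measured ones.
mb_per_child = 2       # assumed resident size of one forked worker process
concurrent = 500       # assumed number of simultaneous connections
total_mb = mb_per_child * concurrent
print(total_mb)        # hundreds of megabytes, matching the Medusa quote
```

An asynchronous server pays the per-process cost once plus a small per-connection buffer, so the same 500 connections might fit in a few megabytes, which is the efficiency gap the Medusa page is describing. Whether that gap still matters once machines routinely have gigabytes of RAM is exactly the question.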
If I can be convinced that I'm not solving a non-existent problem,
then I would be happy to help you with this, if you want help.
UNSUBSCRIBE: mailto:[EMAIL PROTECTED]