On 17 Jun 2011, at 6:14 PM, Paul Querna wrote:

- Existing APIs in unix and windows really really suck at non-blocking behaviour. Standard APR file handling couldn't do it, so we couldn't use it properly. DNS libraries are really terrible at it. The vast majority of "async" DNS libraries are just hidden threads wrapping blocking calls, which in turn means unknown resource limits are hit when you least expect it. Database and LDAP calls are blocking. What this means practically is that you can't link to most software out there.


Yes.

Don't use the existing APIs.

Use libuv for IO.

Use c-ares for DNS.

Don't use LDAP and databases in the event loop. Not all content
generation needs to be in the main event loop, but lots of content
generation and handling of clients should be.
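
For instance, a minimal sketch of the c-ares point (the hostname and
loop details are only illustrative): the resolver hands its sockets to
a select() loop the caller owns, so nothing blocks and there is no
hidden thread.

    #include <ares.h>
    #include <arpa/inet.h>
    #include <netdb.h>
    #include <stdio.h>
    #include <sys/select.h>

    static void dns_cb(void *arg, int status, int timeouts,
                       struct hostent *host)
    {
        char ip[INET6_ADDRSTRLEN];
        (void)arg; (void)timeouts;
        if (status != ARES_SUCCESS) {
            fprintf(stderr, "lookup failed: %s\n", ares_strerror(status));
            return;
        }
        if (!host || !host->h_addr_list[0])
            return;
        ares_inet_ntop(host->h_addrtype, host->h_addr_list[0],
                       ip, sizeof(ip));
        printf("%s -> %s\n", host->h_name, ip);
    }

    int main(void)
    {
        ares_channel channel;
        ares_library_init(ARES_LIB_INIT_ALL);
        ares_init(&channel);
        ares_gethostbyname(channel, "www.apache.org", AF_INET,
                           dns_cb, NULL);

        /* Drive the resolver from an event loop we own: c-ares exposes
           its sockets instead of hiding a thread behind a blocking
           call. */
        for (;;) {
            fd_set readers, writers;
            struct timeval tv, *tvp;
            int nfds;

            FD_ZERO(&readers);
            FD_ZERO(&writers);
            nfds = ares_fds(channel, &readers, &writers);
            if (nfds == 0)
                break;                  /* no queries outstanding */
            tvp = ares_timeout(channel, NULL, &tv);
            select(nfds, &readers, &writers, NULL, tvp);
            ares_process(channel, &readers, &writers);
        }
        ares_destroy(channel);
        ares_library_cleanup();
        return 0;
    }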

This is where the premise falls down. You can't advertise yourself as a generally extensible webserver, and then tell everybody that the only libraries they are allowed to use come from a tiny exclusive list.

People who extend httpd will use whatever library is most convenient to them, and when their server becomes unstable, they will quite rightly blame httpd, not their own code. There has been no shortage of other projects learning this lesson over the last ten years.

You are confusing the 'core' network IO model with fault isolation.
The Worker MPM has actually been quite good on most platforms for the
last decade. There is little reason to use prefork anymore.

In our experience, prefork is still the basis for our dynamic code servers. As a media organisation we experience massive thundering herds, and so fault isolation for us is a big deal. We certainly don't prefork exclusively, just where we need it, but it remains our required lowest common denominator.

With load balancers in front of httpd absorbing the massive concurrent connections, httpd itself doesn't need to handle that concurrency. We pipe the requests from the load balancers down a modest number of parallel keepalive connections, keeping concurrent connections to a sane level. Modern multi-core hardware is really good at this sort of stuff.

Obviously, one size doesn't fit all, which is why we have MPMs.

Yes, an event loop in the core will be an awesome thing to have, but we need the option to retain both prefork and worker behaviour, and it has to be designed very carefully so that we remain good at being reliable.

Should we run PHP inside the core event loop?  Hell no.

Will people who extend our code try to run PHP inside the event loop? Hell yes, and this is where the problem lies. We need to design our server around what our users will do. It's no use berating users afterwards for code they choose to write in good faith.

I think, as Stefan alludes to, there is a reasonable middle ground
where network IO is done well in an event loop, but we can still
maintain easy extensibility, with some multi-process and multi-thread
systems for content generators that have their own needs, like file IO.
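
A sketch of that split, assuming current libuv (the file path and
struct below are made up for illustration): uv_queue_work() runs the
blocking read on a worker thread and delivers the result back on the
loop thread, so the event loop never stalls on disk.

    #include <stdio.h>
    #include <stdlib.h>
    #include <uv.h>

    /* Illustrative blocking job: find a file's size synchronously. */
    typedef struct {
        const char *path;
        long size;
    } file_job_t;

    static void work_cb(uv_work_t *req)
    {
        /* Runs on a libuv worker thread, so blocking here is safe. */
        file_job_t *job = req->data;
        FILE *fp = fopen(job->path, "rb");
        job->size = -1;
        if (fp) {
            if (fseek(fp, 0, SEEK_END) == 0)
                job->size = ftell(fp);
            fclose(fp);
        }
    }

    static void after_cb(uv_work_t *req, int status)
    {
        /* Back on the event loop thread; hand the result to the
           client here. */
        file_job_t *job = req->data;
        printf("%s: %ld bytes (status %d)\n", job->path, job->size,
               status);
        free(req);
    }

    int main(void)
    {
        uv_loop_t *loop = uv_default_loop();
        static file_job_t job = { "/etc/hosts", 0 };
        uv_work_t *req = malloc(sizeof(*req));

        req->data = &job;
        uv_queue_work(loop, req, work_cb, after_cb); /* offload IO */
        return uv_run(loop, UV_RUN_DEFAULT);
    }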

But certain things in the core, like SSL, must be done right, and done
in an evented way. It'll be hard, but we are programmers after all,
aren't we?

I don't understand the problem with SSL - OpenSSL supports asynchronous IO, we just need to use it.
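
Roughly like this, assuming the socket has already been made
non-blocking and attached with SSL_set_fd(); the tls_handshake_step()
wrapper and its events out-parameter are my own illustration, not an
OpenSSL API:

    #include <poll.h>
    #include <openssl/ssl.h>

    /* One step of a TLS handshake on a non-blocking socket.  Returns 1
       when the handshake is complete, 0 when the caller's event loop
       should poll the fd for *events (POLLIN or POLLOUT) and call this
       again, and -1 on a real error. */
    static int tls_handshake_step(SSL *ssl, short *events)
    {
        int rc = SSL_do_handshake(ssl);
        if (rc == 1)
            return 1;                    /* handshake finished */

        switch (SSL_get_error(ssl, rc)) {
        case SSL_ERROR_WANT_READ:
            *events = POLLIN;            /* wait until readable, retry */
            return 0;
        case SSL_ERROR_WANT_WRITE:
            *events = POLLOUT;           /* wait until writable, retry */
            return 0;
        default:
            return -1;                   /* genuine failure */
        }
    }

The same WANT_READ/WANT_WRITE dance applies to SSL_read() and
SSL_write() once the handshake is done.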

Regards,
Graham