>> The responses all come back successfully within a few seconds.
>> Can you give me a really general description of the sort of problem
>> that could behave like this?
>
> Your server is just a single computer, running multiple processes.
> Each request from a user (be it you or someone else) requires a
> certain amount of resources while it's executing. If there aren't
> enough resources, some of the requests will have to wait until enough
> others have finished in order for the resources to be freed up.

Here's where I'm confused.  The requests are made via a browser and
the response is displayed in the browser.  There is no additional
processing besides the display of the response.  The responses are
received and displayed within about 3 seconds of when the requests are
made.  Shouldn't this mean that all processing related to these
transactions is completed within 3 seconds?  If so, I don't understand
why apache2 seems to bog down a bit for about 10 minutes afterward.

- Grant


> To really simplify things, let's say your server has a single CPU
> core, the queries made against it only require CPU consumption, not
> disk consumption, and the queries you're making require 3s of CPU time.
>
> If you make a query, the server will spend 3s thinking before it spits
> a result back to you. During this time, it can't think about anything
> else...if it does, your response is delayed by however long the server
> spends thinking about those other things.
>
> Let's say you make two queries at the same time. Each requires 3s of
> CPU time, so you'll need a grand total of 6s to get all your results
> back. That's fine, you're expecting this.
>
> Now let's say you make a query, and someone else makes a query. Each
> query takes 3s of CPU time. Since the server has 6s worth of work to
> do, all the users will get their responses by the end of that 6s.
> Depending on how a variety of factors come into play, user A might see
> his query come back at the end of 3s, and user B might see his query
> come back at the end of 6s. Or it might be reversed. Or both users
> might not see their results until the end of that 6s. It's really not
> very predictable.
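>
> To make that concrete, here's a toy sketch (plain Python, purely
> illustrative) of one core handling those two 3s requests under the two
> extremes, strict first-come-first-served versus fair time-sharing:
>
>     # One CPU core, two requests, each needing 3s of CPU time.
>     CPU_SECONDS = 3
>     users = ['user A', 'user B']
>
>     # First-come-first-served: requests run back to back.
>     finished_at = 0
>     for user in users:
>         finished_at += CPU_SECONDS
>         print('%s (FCFS): response after %ds' % (user, finished_at))
>
>     # Fair time-sharing: the core alternates between the requests,
>     # so neither finishes until nearly all the work is done.
>     total = CPU_SECONDS * len(users)
>     for user in users:
>         print('%s (time-shared): response after roughly %ds' % (user, total))
>
> Either way the server does 6s of work; the only thing that changes is
> who waits how long.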
>
> The more queries you make, the more work you give the server. If the
> server has to spend a few seconds' worth of resources, that's a few
> seconds' worth of resources unavailable to other users. A few seconds
> for a query against a web server is actually a huge amount of time...a
> well-tuned application on a well-tuned webserver backed by a
> well-tuned database should probably respond to the query in under
> 50ms! This is because there are often many, many users making queries,
> and each user tends to make many queries at the same time.
>
> There are several things you can do to improve the state of things.
> The first and foremost is to add caching in front of the server, using
> an accelerator proxy (e.g. squid running in accelerator mode). In
> this way, you have a program which receives the user's request, checks
> to see if it's a request that it already has a response for, checks
> whether that response is still valid, and then checks to see whether
> or not it's permitted to respond on the server's behalf...almost
> entirely without bothering the main web server. This process is far,
> far, far faster than having the request hit the serving application's
> main code.
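>
> As a rough sketch of what that looks like (assuming squid 2.6 or
> later, with Apache moved to port 8080 on the same box; the hostname is
> just a placeholder), the relevant squid.conf lines are roughly:
>
>     # Listen on port 80 and act as an accelerator for the web server
>     http_port 80 accel defaultsite=www.example.com
>
>     # Forward cache misses to Apache, now on 127.0.0.1:8080
>     cache_peer 127.0.0.1 parent 8080 0 no-query originserver name=myAccel
>
>     # Only accelerate requests for our own site
>     acl our_sites dstdomain www.example.com
>     http_access allow our_sites
>     cache_peer_access myAccel allow our_sites
>
> How much this helps depends heavily on whether the application sends
> sensible Cache-Control/Expires headers; squid will only reuse a
> response it's allowed to consider still valid.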
>
> The second thing is to check the web server configuration itself. Does
> it have enough spare request handlers available? Does it have too
> many? If there's enough CPU and RAM left over to launch a few more
> request handlers when the server is under heavy load, it might be a
> good idea to allow it to do just that.
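>
> For apache2 with the prefork MPM, the knobs in question look something
> like this (the numbers are only illustrative; the right values depend
> on how much RAM each child process ends up using, and in newer Apache
> releases MaxClients is spelled MaxRequestWorkers):
>
>     <IfModule mpm_prefork_module>
>         StartServers          5
>         MinSpareServers       5
>         MaxSpareServers      10
>         MaxClients          150
>         MaxRequestsPerChild 1000
>     </IfModule>
>
> Setting MaxClients too high is as bad as setting it too low: if the
> children don't all fit in RAM, the box starts swapping and everything
> slows to a crawl.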
>
> The third thing to do is to tune the database itself. MySQL in
> particular ships with horrible default settings that typically hold
> its performance far below what the hardware it runs on can deliver.
> Tuning the database requires knowledge of how the database engine
> works. There's an entire profession dedicated to doing that right...
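>
> To give a flavor of it (example values only, not recommendations; the
> right numbers depend entirely on your RAM, storage engine and
> workload), the my.cnf settings people usually look at first are along
> these lines:
>
>     [mysqld]
>     # Biggest single win for InnoDB: keep the working set in RAM
>     innodb_buffer_pool_size = 1G
>     # Index cache for MyISAM tables
>     key_buffer_size         = 128M
>     # Cache for identical, repeated SELECTs
>     query_cache_size        = 32M
>     # Don't accept more connections than the box can actually serve
>     max_connections         = 100
>
> Scripts like mysqltuner can suggest starting points, but they're no
> substitute for understanding what each setting actually does.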
>
> The fourth thing to do is add caching to the application, using things
> like memcachedb. This may require modifying the application...though
> if the application has support already, then, well, great.
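>
> The pattern itself is simple: before running the expensive query, look
> in the cache; on a miss, run the query and remember the answer. A
> minimal sketch in Python using the python-memcached client (the query
> function is just a stand-in for whatever your slow call is):
>
>     import memcache
>
>     mc = memcache.Client(['127.0.0.1:11211'])
>
>     def get_report(report_id):
>         key = 'report:%s' % report_id
>         result = mc.get(key)
>         if result is None:
>             # Cache miss: hit the database, then remember the answer
>             result = run_expensive_report_query(report_id)
>             mc.set(key, result, time=300)  # keep it for five minutes
>         return result
>
> The hard part, as always with caching, is deciding how stale an answer
> you're willing to serve.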
>
> If that's still not enough, there are more things you can do, but you
> should probably start considering throwing more hardware at the problem...
