On 02/08/2013 09:39 PM, Grant wrote:
>>>> A little more information would help, like what web server,
>>>> what kind of requests, etc.
>>>> 
>>>> -Kevin
>>> 
>>> It's Apache and the requests/responses are XML.  I know this is
>>> pathetically little information with which to diagnose the
>>> problem. I'm just wondering if there is a tool or method
>>> that's good for diagnosing things of this nature.
>> 
>> The problems are server-side, not necessarily client-side. Your 
>> optimizations are going to need to be performed there.
> 
> Are you saying the problem may lie with the server to which I was 
> making the request?

Yes.

> The responses all come back successfully within a few seconds.
> Can you give me a really general description of the sort of problem
> that could behave like this?

Your server is just a single computer, running multiple processes.
Each request from a user (be it you or someone else) requires a
certain amount of resources while it's executing. If there aren't
enough resources to go around, some requests have to wait until
others finish and free those resources up.

To really simplify things, let's say your server has a single CPU
core, the queries made against it consume only CPU (no disk), and
each query you make requires 3s of CPU time.

If you make a query, the server will spend 3s thinking before it spits
a result back to you. During that time, it can't think about anything
else...and if it does, your response is delayed by however long the
server spends thinking about those other things.

Let's say you make two queries at the same time. Each requires 3s of
CPU time, so you'll need a grand total of 6s to get all your results
back. That's fine, you're expecting this.

Now let's say you make a query, and someone else makes a query. Each
query takes 3s of CPU time. Since the server has 6s' worth of work to
do, both users will have their responses by the end of that 6s.
Depending on how a variety of factors come into play, user A might see
his query come back at the end of 3s, and user B might see his query
come back at the end of 6s. Or it might be reversed. Or both users
might not see their results until the end of that 6s. It's really not
very predictable.
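
To put that arithmetic in code form, here's a toy model of the
single-core server. The two scheduling extremes below are
illustrations only, not how a real kernel schedules work:

    # Toy single-core server: every query needs 3s of CPU.
    QUERIES = ["user A", "user B"]   # two queries arrive at the same instant
    CPU_PER_QUERY = 3.0              # seconds of CPU each one needs

    # Extreme 1: strict FIFO -- one query runs to completion, then the next.
    clock = 0.0
    for who in QUERIES:
        clock += CPU_PER_QUERY
        print(f"FIFO:       {who} finishes at {clock:.0f}s")

    # Extreme 2: perfectly fair time-slicing -- both queries make progress
    # together, so neither finishes until all the work is done.
    total = CPU_PER_QUERY * len(QUERIES)
    for who in QUERIES:
        print(f"Fair share: {who} finishes at {total:.0f}s")

Either way the server burns 6s of CPU; the only thing that changes is
who waits for how much of it.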

The more queries you make, the more work you give the server. If the
server has to spend a few seconds' worth of resources, that's a few
seconds' worth of resources unavailable to other users. A few seconds
for a query against a web server is actually a huge amount of time...a
well-tuned application on a well-tuned webserver backed by a
well-tuned database should probably respond to the query in under
50ms! This is because there are often many, many users making queries,
and each user tends to make many queries at the same time.
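
To put rough numbers on why 50ms matters (back-of-envelope only,
assuming one CPU-bound core with nothing else going on):

    # Queries per minute one core can serve, at two per-query costs.
    for per_query in (3.0, 0.05):        # 3s vs 50ms of CPU per query
        per_minute = 60.0 / per_query
        print(f"{per_query * 1000:>5.0f} ms/query -> ~{per_minute:.0f} queries/minute/core")

The 3s query caps that core at about 20 requests a minute; the 50ms
one handles about 1200 in the same time.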

There are several things you can do to improve the situation. The
first and foremost is to add caching in front of the server, using an
accelerator proxy (e.g. Squid running in accelerator mode). That way
you have a program which receives the user's request, checks whether
it already has a response for that request, checks whether that
response is still valid, and checks whether it's permitted to respond
on the server's behalf...almost entirely without bothering the main
web server. This process is far, far, far faster than having the
request hit the serving application's main code.
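
Conceptually, the check the proxy performs on every request looks
something like the sketch below (cacheability rules, headers, and
cookies are all glossed over; this is the idea, not Squid's actual
implementation):

    import time

    cache = {}  # url -> (response, expires_at); real proxies key on much more

    def handle(url, fetch_from_backend, ttl=60):
        """Serve from cache when allowed; only bother the backend otherwise."""
        hit = cache.get(url)
        if hit is not None:
            response, expires_at = hit
            if time.time() < expires_at:        # cached and still valid?
                return response                 # the web server never sees this
        # Miss or stale: ask the real server, then remember the answer.
        response = fetch_from_backend(url)
        cache[url] = (response, time.time() + ttl)
        return response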

The second thing is to check the web server configuration itself. Does
it have enough spare request handlers available? Does it have too
many? If there's enough CPU and RAM left over to launch a few more
request handlers when the server is under heavy load, it might be a
good idea to allow it to do just that.
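
A quick way to sanity-check the handler count on a prefork-style
setup, where every handler is a separate process (all the numbers
below are made-up placeholders -- measure your own):

    # Back-of-envelope: how many extra request handlers can spare RAM support?
    spare_ram_mb       = 2048   # memory still free at peak load
    rss_per_handler_mb = 40     # typical resident size of one handler process
    current_handlers   = 50

    extra = spare_ram_mb // rss_per_handler_mb
    print(f"RAM allows roughly {extra} more handlers "
          f"(a ceiling near {current_handlers + extra} total); "
          "check CPU headroom separately.")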

The third thing to do is to tune the database itself. MySQL in
particular ships with horrible default settings that typically limit
its performance to far below what the hardware you'd normally find it
on can deliver. Tuning the database requires knowledge of how the
database engine works. There's an entire profession dedicated to
doing that right...
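
As a single illustration of what that tuning looks like: on a box
dedicated to the database, a common rule of thumb is to hand most of
the RAM to InnoDB's buffer pool, which the stock settings leave far
smaller than that. Rule of thumb only, not a substitute for real
analysis:

    # Illustrative sizing of innodb_buffer_pool_size on a dedicated MySQL host.
    # The 70-80% figure is folklore-level guidance, not a universal answer.
    total_ram_gb = 8
    buffer_pool_gb = int(total_ram_gb * 0.75)
    print(f"innodb_buffer_pool_size ~= {buffer_pool_gb}G "
          "(leave the rest for per-connection buffers and the OS)")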

The fourth thing to do is add caching to the application, using things
like memcachedb. This may require modifying the application...though
if the application has support already, then, well, great.
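
If you do end up modifying the application, the usual pattern is
cache-aside: look in the cache before doing the expensive work, and
populate it afterwards. A minimal sketch, assuming the python-memcached
client (memcachedb is designed to speak the same protocol); get_report
is a stand-in for your real query:

    import memcache   # python-memcached, as one example of a client

    mc = memcache.Client(["127.0.0.1:11211"])

    def cached(key, compute, ttl=300):
        """Cache-aside: return the cached value if present, else compute and store it."""
        value = mc.get(key)
        if value is None:
            value = compute()             # the expensive database query
            mc.set(key, value, time=ttl)  # remember it for the next few minutes
        return value

    # Usage: result = cached("report:2013-02", get_report)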

If that's still not enough, there are more things you can do, but you
should probably start considering throwing more hardware at the problem...
