Re: [gentoo-user] {OT} LWP::UserAgent slows website

2013-02-16 Thread Grant
 Now that's a new (and important!) piece of information. Your server
 runs slow for 10 *minutes* after your script has made its request?

 To me, that indicates that important data wound up getting swapped to
 disk on the server, and the slow behavior reported by other users is
 the result of that data being swapped back in on-demand.

 That also indicates that your script's requests (and, possibly,
 request pattern) cause some process in the server to allocate far more
 memory than usual, which is why the server is swapping things to disk.

OK, thank you for the explanation.  That does make sense.

 I agree. My rule of thumb was always that I must prevent Apache swapping
 at all costs as the performance impact is horrific.

 It doesn't have to mean installing more RAM (which is quick, easy, cheap
 and often rather effective), sensible optimizations can work wonders
 too, as can nginx as a proxy in front of Apache.

I've been using net-mail/up-imapproxy but the initscript has issues.
Is nginx good for IMAP too?
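For what it's worth, nginx does ship a mail proxy module that can sit in front of IMAP. A minimal sketch, assuming nginx was built with --with-mail and that a hypothetical auth_http service answers on 127.0.0.1:9000 (both are placeholders, not part of the original discussion):

```nginx
mail {
    # nginx queries this HTTP service to authenticate each login and
    # learn which backend IMAP server to connect the client to.
    auth_http 127.0.0.1:9000/auth;

    server {
        listen   143;
        protocol imap;
        proxy    on;
    }
}
```

Note that, unlike up-imapproxy, nginx's mail module is a routing proxy rather than a connection-caching proxy, so it is not a drop-in replacement.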

- Grant



Re: [gentoo-user] {OT} LWP::UserAgent slows website

2013-02-11 Thread Michael Mol

On 02/10/2013 12:05 AM, Grant wrote:
 The responses all come back successfully within a few seconds.
  Can you give me a really general description of the sort of 
 problem that could behave like this?
 
 Your server is just a single computer, running multiple
 processes. Each request from a user (be it you or someone else)
 requires a certain amount of resources while it's executing. If
 there aren't enough resources, some of the requests will have to
 wait until enough others have finished in order for the resources
 to be freed up.
 
 Here's where I'm confused.  The requests are made via a browser and
  the response is displayed in the browser.  There is no additional
  processing besides the display of the response.

You're running a client-side script that causes the *server* to do work.
The more work the server has to do, the slower it will perform for both
serving up your requests and those of other users. This is completely
independent of the work the client has to do.


 The responses are received and displayed within about 3 seconds of
  when the requests are made.  Shouldn't this mean that all
 processing related to these transactions is completed within 3
 seconds?

There's client-side processing in handling the server's response, but
there's also server-side processing in handling the client's request.
What Stroller called a "wall of text" was a crash course in how a server
can have too many things to do in a short amount of time, and some of
the side-effects you can see--like having two nominally-3s queries both
appear to take 6s, from the client's perspective.

 If so, I don't understand why apache2 seems to bog down a bit for 
 about 10 minutes afterward.

Now that's a new (and important!) piece of information. Your server
runs slow for 10 *minutes* after your script has made its request?

To me, that indicates that important data wound up getting swapped to
disk on the server, and the slow behavior reported by other users is
the result of that data being swapped back in on-demand.

That also indicates that your script's requests (and, possibly,
request pattern) cause some process in the server to allocate far more
memory than usual, which is why the server is swapping things to disk.

Why, exactly, the server is consuming so much memory depends on a lot
of factors.



Re: [gentoo-user] {OT} LWP::UserAgent slows website

2013-02-11 Thread Michael Mol

On 02/10/2013 08:53 PM, Stroller wrote:
 
 On 10 February 2013, at 05:05, Grant wrote:
 ... Your server is just a single computer, running multiple
 processes. Each request from a user (be it you or someone else)
 requires a certain amount of resources while it's executing. If
 there aren't enough resources, some of the requests will have
 to wait until enough others have finished in order for the
 resources to be freed up.
 
 Here's where I'm confused.  …   The responses are received and
 displayed within about 3 seconds of when the requests are made.
 … , I don't understand why apache2 seems to bog down a bit for
 about 10 minutes afterward.
 
 Seriously, after finishing Mr Mol's wall-of-text (learn to snip,
 Grant!) I wondered if he'd even read your question!
 
 Stroller.
 
 

I've been using online communications for twenty years...and nobody
tempts me to create my first killfile like you do.



Re: [gentoo-user] {OT} LWP::UserAgent slows website

2013-02-11 Thread Alan McKinnon
On 11/02/2013 19:43, Michael Mol wrote:
 Now that's a new (and important!) piece of information. Your server
 runs slow for 10 *minutes* after your script has made its request?
 
 To me, that indicates that important data wound up getting swapped to
 disk on the server, and the slow behavior reported by other users is
 the result of that data being swapped back in on-demand.
 
 That also indicates that your script's requests (and, possibly,
 request pattern) cause some process in the server to allocate far more
 memory than usual, which is why the server is swapping things to disk.


I agree. My rule of thumb was always that I must prevent Apache swapping
at all costs as the performance impact is horrific.

It doesn't have to mean installing more RAM (which is quick, easy, cheap
and often rather effective), sensible optimizations can work wonders
too, as can nginx as a proxy in front of Apache.
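A minimal sketch of that nginx-in-front-of-Apache setup (hostnames and ports are placeholders; Apache is assumed to have been moved to 127.0.0.1:8080):

```nginx
server {
    listen 80;
    server_name www.example.com;   # placeholder

    location / {
        # Hand the request to the Apache backend, preserving the
        # original Host header and client address.
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```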



-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] {OT} LWP::UserAgent slows website

2013-02-11 Thread Stroller

On 11 February 2013, at 17:43, Michael Mol wrote:
 ...
 If so, I don't understand why apache2 seems to bog down a bit for 
 about 10 minutes afterward.
 
 Now that's a new (and important!) piece of information. Your server
 runs slow for 10 *minutes* after your script has made its request?

This information is not new - it was in Grant's first post in this thread, 
hence the reason I wondered if you'd read it.

I am sorry if I have caused you offence on any other occasion - if so, please 
feel free to explain why. 

Stroller.


Re: [gentoo-user] {OT} LWP::UserAgent slows website

2013-02-11 Thread Michael Mol

On 02/11/2013 06:07 PM, Stroller wrote:
 
 On 11 February 2013, at 17:43, Michael Mol wrote:
 ...
 If so, I don't understand why apache2 seems to bog down a bit
 for about 10 minutes afterward.
 
 Now that's a new (and important!) piece of information. Your 
 server runs slow for 10 *minutes* after your script has made its 
 request?
 
 This information is not new - it was in Grant's first post in this 
 thread, hence the reason I wondered if you'd read it.

*goes back in the thread*

Indeed it is, and I missed it. Whoops. I assembled my understanding of
the problem from subsequent posts, rather than the initial one.

 
 I am sorry if I have caused you offence on any other occasion - if 
 so, please feel free to explain why.

Primarily, what bothers me is your typically acerbic tone, and that
your posts often (at least to my perception) carry more pejorative
than useful information. I greatly appreciate your more conciliatory
tone here!



Re: [gentoo-user] {OT} LWP::UserAgent slows website

2013-02-11 Thread Peter Humphrey
On Tuesday 12 February 2013 00:04:52 Michael Mol wrote:

 Primarily, what bothers me is your typically acerbic tone, and that
 your posts often (at least to my perception) carry more pejorative
 than useful information.

I've not noticed that, for what it's worth.

-- 
Peter



Re: [gentoo-user] {OT} LWP::UserAgent slows website

2013-02-11 Thread Stroller

On 12 February 2013, at 00:04, Michael Mol wrote:
 I am sorry if I have caused you offence on any other occasion - if 
 so, please feel free to explain why.
 
 Primarily, what bothers me is your typically acerbic tone, and that
 your posts often (at least to my perception) carry more pejorative
 than useful information.

I have always attempted the very opposite.

I'm a little shocked, and will attempt to reassess with fresh eyes before 
posting in the future.

I can only hope you may have confused me with someone else.

I will occasionally make a terse response to a problem, asking no more than
"have you checked X?", "what does /var/log/Y say?", "please post the output of
`exec-Z`". In my experience, the right questions (i.e. the right choice of X, Y
& Z) will most usually lead the poster to the solution.

Stroller.




Re: [gentoo-user] {OT} LWP::UserAgent slows website

2013-02-11 Thread Michael Mol

On 02/11/2013 08:05 PM, Stroller wrote:
 
 On 12 February 2013, at 00:04, Michael Mol wrote:
 I am sorry if I have caused you offence on any other occasion -
 if so, please feel free to explain why.
 
 Primarily, what bothers me is your typically acerbic tone, and
 that your posts often (at least to my perception) carry more
 pejorative than useful information.
 
 I have always attempted the very opposite.
 
 I'm a little shocked, and will attempt to reassess with fresh eyes
 before posting in the future.
 
 I can only hope you may have confused me with someone else.
 
 I will occasionally make a terse response to a problem, asking no
 more than "have you checked X?", "what does /var/log/Y say?", "please
 post the output of `exec-Z`". In my experience, the right questions
 (i.e. the right choice of X, Y & Z) will most usually lead the
 poster to the solution.
 
 Stroller.

I sincerely apologize. I will try to read your messages more clearly
in the tone they're obviously intended. Perhaps I do have you confused
with someone else. I hope so...either way, I apologize.



Re: [gentoo-user] {OT} LWP::UserAgent slows website

2013-02-10 Thread Stroller

On 10 February 2013, at 05:05, Grant wrote:
 ...
 Your server is just a single computer, running multiple processes.
 Each request from a user (be it you or someone else) requires a
 certain amount of resources while it's executing. If there aren't
 enough resources, some of the requests will have to wait until enough
 others have finished in order for the resources to be freed up.
 
 Here's where I'm confused.  …   The responses are
 received and displayed within about 3 seconds of when the requests are
 made.  … , I don't understand
 why apache2 seems to bog down a bit for about 10 minutes afterward.

Seriously, after finishing Mr Mol's wall-of-text (learn to snip, Grant!) I 
wondered if he'd even read your question!

Stroller.




Re: [gentoo-user] {OT} LWP::UserAgent slows website

2013-02-09 Thread Adam Carter
 There are several things you can do to improve the state of things.
 The first and foremost is to add caching in front of the server, using
 an accelerator proxy. (i.e. squid running in accelerator mode.) In
 this way, you have a program which receives the user's request, checks
 to see if it's a request that it already has a response for, checks
 whether that response is still valid, and then checks to see whether
 or not it's permitted to respond on the server's behalf...almost
 entirely without bothering the main web server. This process is far,
 far, far faster than having the request hit the serving application's
 main code.



I was under the impression that Apache is coded sensibly enough to handle
incoming requests at least as well as Squid would. Agree with everything
else tho.

OP should look into what's required on the back end to process those 6
requests, as it superficially appears that a very small number of requests
is generating a huge amount of work, and that means the site would be easy
to DoS.


Re: [gentoo-user] {OT} LWP::UserAgent slows website

2013-02-09 Thread Michael Mol

On 02/09/2013 05:36 AM, Adam Carter wrote:
 
 There are several things you can do to improve the state of
 things. The first and foremost is to add caching in front of the
 server, using an accelerator proxy. (i.e. squid running in
 accelerator mode.) In this way, you have a program which receives
 the user's request, checks to see if it's a request that it already
 has a response for, checks whether that response is still valid,
 and then checks to see whether or not it's permitted to respond on
 the server's behalf...almost entirely without bothering the main
 web server. This process is far, far, far faster than having the
 request hit the serving application's main code.
 
 
 
 I was under the impression that Apache is coded sensibly enough to
 handle incoming requests at least as well as Squid would. Agree
 with everything else tho.

Sure, so long as Apache doesn't have any additional modules loaded. If
it's got something like mod_php loaded (extraordinarily common),
mod_perl or mod_python (less common, now) then the init time of
mod_php gets added to the init time for every request handler.


 OP should look into what's required on the back end to process
 those 6 requests, as it superficially appears that a very small
 number of requests is generating a huge amount of work, and that
 means the site would be easy to DoS.

Absolutely, hence the steps I outlined to reduce or optimize backend
processing.



Re: [gentoo-user] {OT} LWP::UserAgent slows website

2013-02-09 Thread Adam Carter
Sure, so long as Apache doesn't have any additional modules loaded. If

 it's got something like mod_php loaded (extraordinarily common),
 mod_perl or mod_python (less common, now) then the init time of
 mod_php gets added to the init time for every request handler.


Interesting, so if you have to use mod_php you'd probably be better off
running Worker than Prefork, and you'd want to keep MaxConnectionsPerChild
on the higher side to reduce the init work you've mentioned, right? It may
also help to verify that KeepAlive is on and tweak MaxKeepAliveRequests a
little higher.
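Those knobs would look something like this in the Apache config (directive names from Apache 2.4 — older 2.2 installs spell it MaxRequestsPerChild; the values are illustrative, not recommendations):

```apache
KeepAlive On
MaxKeepAliveRequests 500
KeepAliveTimeout 5

<IfModule mpm_prefork_module>
    StartServers             5
    MinSpareServers          5
    MaxSpareServers         10
    MaxRequestWorkers      150
    # Kept high so each child's mod_php init cost is amortized over
    # many requests; 0 would disable child recycling entirely.
    MaxConnectionsPerChild 10000
</IfModule>
```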


Re: [gentoo-user] {OT} LWP::UserAgent slows website

2013-02-09 Thread Michael Mol
On Feb 9, 2013 9:26 PM, Adam Carter adamcart...@gmail.com wrote:

 Sure, so long as Apache doesn't have any additional modules loaded. If

 it's got something like mod_php loaded (extraordinarily common),
 mod_perl or mod_python (less common, now) then the init time of
 mod_php gets added to the init time for every request handler.


 Interesting, so if you have to use mod_php you'd probably be better off
running Worker than Prefork, and you'd want to keep MaxConnectionsPerChild
on the higher side to reduce the init work you've mentioned, right? It may
also help to verify that KeepAlive is on and tweak MaxKeepAliveRequests a
little higher.

Can't; mod_php isn't compatible with mpm_worker. You have to use a
single-threaded mpm like prefork or itk.

Anyway, you're starting to get the idea why you want a caching proxy in
front of apache.


Re: [gentoo-user] {OT} LWP::UserAgent slows website

2013-02-09 Thread Adam Carter

 Can't; mod_php isn't compatible with mpm_worker. You have to use a
 single-threaded mpm like prefork or itk.

 Anyway, you're starting to get the idea why you want a caching proxy in
 front of apache.

Indeed. Thanks for your comments.


Re: [gentoo-user] {OT} LWP::UserAgent slows website

2013-02-09 Thread Grant
 The responses all come back successfully within a few seconds.
 Can you give me a really general description of the sort of problem
 that could behave like this?

 Your server is just a single computer, running multiple processes.
 Each request from a user (be it you or someone else) requires a
 certain amount of resources while it's executing. If there aren't
 enough resources, some of the requests will have to wait until enough
 others have finished in order for the resources to be freed up.

Here's where I'm confused.  The requests are made via a browser and
the response is displayed in the browser.  There is no additional
processing besides the display of the response.  The responses are
received and displayed within about 3 seconds of when the requests are
made.  Shouldn't this mean that all processing related to these
transactions is completed within 3 seconds?  If so, I don't understand
why apache2 seems to bog down a bit for about 10 minutes afterward.

- Grant



Re: [gentoo-user] {OT} LWP::UserAgent slows website

2013-02-08 Thread Grant
 A little more information would help, like what webserver, what kind of
 requests, etc.

 -Kevin

It's apache and the requests/responses are XML.  I know this is
pathetically little information with which to diagnose the problem.
I'm just wondering if there is a tool or method that's good to
diagnose things of this nature.

- Grant


 I have a script that makes 6 successive HTTP requests via
 LWP::UserAgent.  It runs fine and takes only about 3 seconds, but
 whenever it is run I start receiving alerts that my website is
 responding slowly to requests.  This lasts for up to around 10
 minutes.  I've tried turning the timeout down to 3 seconds and I've
 tried LWPx::ParanoidAgent but the behavior is the same.

 Can anyone tell me how to go about tracking this down?

 - Grant






Re: [gentoo-user] {OT} LWP::UserAgent slows website

2013-02-08 Thread Michael Mol
On Fri, Feb 8, 2013 at 5:10 PM, Grant emailgr...@gmail.com wrote:
 A little more information would help, like what webserver, what kind of
 requests, etc.

 -Kevin

 It's apache and the requests/responses are XML.  I know this is
 pathetically little information with which to diagnose the problem.
 I'm just wondering if there is a tool or method that's good to
 diagnose things of this nature.

The problems are server-side, not necessarily client-side. Your
optimizations are going to need to be performed there.

-- 
:wq



Re: [gentoo-user] {OT} LWP::UserAgent slows website

2013-02-08 Thread Grant
 A little more information would help, like what webserver, what kind of
 requests, etc.

 -Kevin

 It's apache and the requests/responses are XML.  I know this is
 pathetically little information with which to diagnose the problem.
 I'm just wondering if there is a tool or method that's good to
 diagnose things of this nature.

 The problems are server-side, not necessarily client-side. Your
 optimizations are going to need to be performed there.

Are you saying the problem may lie with the server to which I was
making the request?  The responses all come back successfully within a
few seconds.  Can you give me a really general description of the sort
of problem that could behave like this?

- Grant



Re: [gentoo-user] {OT} LWP::UserAgent slows website

2013-02-08 Thread Michael Mol

On 02/08/2013 09:39 PM, Grant wrote:
 A little more information would help, like what webserver,
 what kind of requests, etc.
 
 -Kevin
 
 It's apache and the requests/responses are XML.  I know this is
  pathetically little information with which to diagnose the 
 problem. I'm just wondering if there is a tool or method
 that's good to diagnose things of this nature.
 
 The problems are server-side, not necessarily client-side. Your 
 optimizations are going to need to be performed there.
 
 Are you saying the problem may lie with the server to which I was 
 making the request?

Yes.

 The responses all come back successfully within a few seconds.
 Can you give me a really general description of the sort of problem
 that could behave like this?

Your server is just a single computer, running multiple processes.
Each request from a user (be it you or someone else) requires a
certain amount of resources while it's executing. If there aren't
enough resources, some of the requests will have to wait until enough
others have finished in order for the resources to be freed up.

To really simplify things, let's say your server has a single CPU
core, the queries made against it only require CPU consumption, not
disk consumption, and the queries you're making require 3s of CPU time.

If you make a query, the server will spend 3s thinking before it spits
a result back to you. During this time, it can't think about anything
else...if it does, the server will take as much longer to respond to
you as it takes thinking about other things.

Let's say you make two queries at the same time. Each requires 3s of
CPU time, so you'll need a grand total of 6s to get all your results
back. That's fine, you're expecting this.

Now let's say you make a query, and someone else makes a query. Each
query takes 3s of CPU time. Since the server has 6s worth of work to
do, all the users will get their responses by the end of that 6s.
Depending on how a variety of factors come into play, user A might see
his query come back at the end of 3s, and user B might see his query
come back at the end of 6s. Or it might be reversed. Or both users
might not see their results until the end of that 6s. It's really not
very predictable.
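That serialization arithmetic can be sketched with a toy single-core model (illustrative only; real schedulers interleave work, which is exactly why the per-user outcome is unpredictable):

```python
# Toy model: one CPU core runs queries back to back, so each query
# finishes at the running sum of all CPU time consumed so far.

def completion_times(cpu_costs):
    """Finish time of each query when a single core serializes them."""
    finished, clock = [], 0
    for cost in cpu_costs:
        clock += cost
        finished.append(clock)
    return finished

print(completion_times([3]))     # a lone 3s query finishes at 3s
print(completion_times([3, 3]))  # two concurrent 3s queries: 3s and 6s
```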

The more queries you make, the more work you give the server. If the
server has to spend a few seconds' worth of resources, that's a few
seconds' worth of resources unavailable to other users. A few seconds
for a query against a web server is actually a huge amount of time...a
well-tuned application on a well-tuned webserver backed by a
well-tuned database should probably respond to the query in under
50ms! This is because there are often many, many users making queries,
and each user tends to make many queries at the same time.

There are several things you can do to improve the state of things.
The first and foremost is to add caching in front of the server, using
an accelerator proxy. (i.e. squid running in accelerator mode.) In
this way, you have a program which receives the user's request, checks
to see if it's a request that it already has a response for, checks
whether that response is still valid, and then checks to see whether
or not it's permitted to respond on the server's behalf...almost
entirely without bothering the main web server. This process is far,
far, far faster than having the request hit the serving application's
main code.
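A sketch of that accelerator setup for Squid 3.x (hostnames and ports are placeholders; directive details may differ on other Squid versions):

```squid
# Squid listens on :80 as an accelerator; Apache is the origin on :8080.
http_port 80 accel defaultsite=www.example.com

cache_peer 127.0.0.1 parent 8080 0 no-query originserver name=apache_origin

acl our_site dstdomain www.example.com
http_access allow our_site
cache_peer_access apache_origin allow our_site
http_access deny all
```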

The second thing is to check the web server configuration itself. Does
it have enough spare request handlers available? Does it have too
many? If there's enough CPU and RAM left over to launch a few more
request handlers when the server is under heavy load, it might be a
good idea to allow it to do just that.

The third thing to do is to tune the database itself. MySQL in
particular ships with horrible default settings that typically limit
its performance to far below the hardware you'd normally find it on.
Tuning the database requires knowledge of how the database engine
works. There's an entire profession dedicated to doing that right...

The fourth thing to do is add caching to the application, using things
like memcachedb. This may require modifying the application...though
if the application has support already, then, well, great.
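The pattern memcached/memcachedb provides is a look-aside cache: the application tries the cache first and only falls through to the expensive backend work on a miss. A self-contained sketch, with an in-process dict standing in for the cache server:

```python
import time

class TTLCache:
    """Look-aside cache with per-entry expiry (stand-in for memcached)."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (expires_at, value)

    def get_or_compute(self, key, compute):
        entry = self.store.get(key)
        now = time.monotonic()
        if entry is not None and entry[0] > now:
            return entry[1]                     # fresh hit: skip the backend
        value = compute()                       # miss: do the expensive work
        self.store[key] = (now + self.ttl, value)
        return value

backend_calls = []
cache = TTLCache(ttl_seconds=60)

def expensive_query():
    backend_calls.append(1)    # stands in for a slow DB query / page render
    return "response"

assert cache.get_or_compute("/page", expensive_query) == "response"
assert cache.get_or_compute("/page", expensive_query) == "response"
assert len(backend_calls) == 1   # second request never touched the backend
```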

If that's still not enough, there are more things you can do, but you
should probably start considering throwing more hardware at the problem...



Re: [gentoo-user] {OT} LWP::UserAgent slows website

2013-02-07 Thread Kevin Brandstatter
A little more information would help, like what webserver, what kind of
requests, etc.

-Kevin

On 02/06/2013 07:13 PM, Grant wrote:
 I have a script that makes 6 successive HTTP requests via
 LWP::UserAgent.  It runs fine and takes only about 3 seconds, but
 whenever it is run I start receiving alerts that my website is
 responding slowly to requests.  This lasts for up to around 10
 minutes.  I've tried turning the timeout down to 3 seconds and I've
 tried LWPx::ParanoidAgent but the behavior is the same.

 Can anyone tell me how to go about tracking this down?

 - Grant







[gentoo-user] {OT} LWP::UserAgent slows website

2013-02-06 Thread Grant
I have a script that makes 6 successive HTTP requests via
LWP::UserAgent.  It runs fine and takes only about 3 seconds, but
whenever it is run I start receiving alerts that my website is
responding slowly to requests.  This lasts for up to around 10
minutes.  I've tried turning the timeout down to 3 seconds and I've
tried LWPx::ParanoidAgent but the behavior is the same.

Can anyone tell me how to go about tracking this down?

- Grant