We're seeing an odd problem when load testing our site. Something's
probably misconfigured, but I don't know where to look.
As we ramp up the number of simulated users with our load testing
software, the number of requests Jetty handles tops out at about
100/second. (Our app does real work to process each request, so the low
rate isn't Jetty's fault.) As more and more requests come in, average
page load time creeps up to 6 or 7 seconds, maximum load time spikes,
and then all at once the system starts sending 503s. Every new request
gets a 503, and the load testing software reports receiving about 500
503s per second, presumably as earlier queued requests are flushed. This
continues for about a minute, and then the system starts accepting
requests again.
So when a lot of users come in, some get serviced, some have to wait a
long time, and then the whole system shuts down for a minute while it
recovers.
I'd prefer not to have a catastrophic failure like this. Perhaps the
system should send 503s earlier so the queue never gets too big. Or
maybe it should never send a 503, and just make the wait times uniformly
longer for all users.
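One option we've been looking at for the "fail early or wait uniformly"
trade-off is Jetty's QoSFilter, which caps the number of concurrently
handled requests and suspends the rest rather than failing them. A sketch
for web.xml — the parameter values here are pure guesses on our part:

```xml
<filter>
  <filter-name>QoSFilter</filter-name>
  <filter-class>org.eclipse.jetty.servlets.QoSFilter</filter-class>
  <!-- Hypothetical limits: at most 50 requests processed at once;
       the rest are suspended up to 30 s instead of getting a 503. -->
  <init-param>
    <param-name>maxRequests</param-name>
    <param-value>50</param-value>
  </init-param>
  <init-param>
    <param-name>suspendMs</param-name>
    <param-value>30000</param-value>
  </init-param>
</filter>
<filter-mapping>
  <filter-name>QoSFilter</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>
```

Is that the right tool for this, or is it meant for a different problem?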
We're on Amazon, so we can use monitoring software to detect when page
load time is > X seconds and spin up more instances.
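On that front we'd wire a CloudWatch alarm to a scale-up policy, roughly
like the command below — the metric/flags are the standard
`aws cloudwatch put-metric-alarm` ones, but the threshold is arbitrary and
the policy ARN is a placeholder:

```shell
# Hypothetical: alarm when average load balancer latency exceeds 5 s
# for two consecutive minutes, triggering an (elided) scale-up policy.
aws cloudwatch put-metric-alarm \
  --alarm-name high-page-latency \
  --namespace AWS/ELB \
  --metric-name Latency \
  --statistic Average \
  --period 60 \
  --evaluation-periods 2 \
  --threshold 5 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions <scale-up-policy-arn>
```

But new instances take minutes to come up, so we still need Jetty itself
to degrade sanely in the meantime.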
What parameters can we twiddle to control how Jetty degrades? Or have we
misconfigured things?
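For instance, one knob we suspect matters is bounding the thread pool's
job queue so rejects happen early instead of after a minute of backlog.
Would something like this be sane? (Jetty 7/8-style jetty.xml; the
capacities are numbers we made up.)

```xml
<Configure id="Server" class="org.eclipse.jetty.server.Server">
  <!-- Hypothetical sizes: a bounded queue makes Jetty shed load
       as soon as the backlog passes ~4 seconds of work at 100/s,
       rather than accumulating a minute's worth of requests. -->
  <Set name="threadPool">
    <New class="org.eclipse.jetty.util.thread.QueuedThreadPool">
      <Set name="minThreads">10</Set>
      <Set name="maxThreads">200</Set>
      <Set name="maxQueued">400</Set>
    </New>
  </Set>
</Configure>
```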
_______________________________________________
jetty-users mailing list
[email protected]
https://dev.eclipse.org/mailman/listinfo/jetty-users