In very simple terms (and I do mean very simple): if you have a graph showing response times over time, then 'good' means level, steady response times that do not change; 'bad' is when they go up. This is pretty simple, right? The problem a lot of people new to load testing have is that they do not set proper objectives, so they are unable to decide whether to stamp things with a PASS or a FAIL. (See rule numero uno in my last post.)
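For example, if the objective is "every page responds within 2 seconds at the agreed load", JMeter can do the PASS/FAIL stamping for you with a Duration Assertion attached to your samplers. A rough sketch of what that element looks like inside a saved test plan (.jmx), with the 2-second limit purely as an example figure (check the exact property names against your JMeter version):

  <!-- Duration Assertion: marks any sample slower than the limit (in milliseconds) as failed -->
  <DurationAssertion guiclass="DurationAssertionGui" testclass="DurationAssertion"
                     testname="Duration Assertion" enabled="true">
    <stringProp name="DurationAssertion.duration">2000</stringProp>
  </DurationAssertion>

With an objective wired in like that, the test run itself tells you whether you met it, instead of leaving you staring at a graph wondering.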
Think about it. If you run a load test with 1 user making 1 request every minute, the response times will most likely always be the same. If you run a test with 2 million users making 500,000 requests per second, you will probably see rather high response times, and a fair few errors to boot. It's not that hard really: decide on the point in between those two load levels, and then *stick to it*. Use a Constant Throughput Timer to do this. Seriously, I really mean it, use this control.

How you decide on the level that is right for your project requires talking to people. Tip: the trick is to estimate the peak volume of traffic the site will actually get after it goes live. Don't underestimate this step; most sites that fail post launch, or are needlessly delayed, end up that way because someone got this wrong.

There you go, performance testing in four paragraphs.
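Working backwards from the peak-volume estimate keeps the number honest. If, say, you expect 3,000 page views in the busiest hour and each page view fires roughly four HTTP requests, that is 12,000 requests an hour, or 200 per minute (figures made up just to show the arithmetic). A rough sketch of how that Constant Throughput Timer then ends up in a saved test plan (.jmx), with the 200/minute target and the calcMode value as example assumptions rather than recommendations:

  <!-- Constant Throughput Timer: paces samplers to a target rate, specified in samples per MINUTE -->
  <ConstantThroughputTimer guiclass="TestBeanGUI" testclass="ConstantThroughputTimer"
                           testname="Constant Throughput Timer" enabled="true">
    <!-- calcMode chooses whether the rate applies per thread or across all active threads;
         1 is assumed here to mean "all active threads" (verify in the GUI for your version) -->
    <intProp name="calcMode">1</intProp>
    <doubleProp>
      <name>throughput</name>
      <value>200.0</value>
      <savedValue>0.0</savedValue>
    </doubleProp>
  </ConstantThroughputTimer>

The easiest way to get this right is to add the timer in the JMeter GUI and let it write the XML for you; the point is simply that the target rate, once agreed, lives in one place and stays fixed from run to run.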
