> We were having a discussion on how to measure the performance
> of web servers and were wondering if anybody had come up with
> a valid way to do this?
> 
> For example, you can determine how good a 3D graphics card is
> based on the number of polygons and refresh rates it delivers
> at different resolutions. So in theory it should be possible to
> get a web hosting company to quote what 'delivery' level over a
> 20-second period (hereafter known as a Reynolds) you are
> guaranteed to get.
> 
> For example, you could define the ability to deliver web content
> from a server (measured assuming a 100Mbps or faster connection)
> in k-reynolds. You would need to define a standard page
> (say, 30k with 19 graphics).
> 
> You would then have to consider 'reynolds division factors':
> number of websites, the server's connection to the web
> (any others?).
> 
> You would also need to consider other 'reynolds division factors',
> such as the type of application servers on the host
> (ColdFusion, PHP, ASP etc.).
> 
> You could then come up with a Reynolds Index for a specific
> machine, and could possibly get a guarantee from a hosting company
> that the Reynolds Index for a hosted web site would not go above
> a certain 'factor'.
> 
> What do you think? Would this be workable?
> 
> Adam 'Reynolds' (of course!)

Unfortunately for the future of the "Reynolds" as a unit of server
performance, there are already established metrics in use for measuring
server performance.

There are two ways you can measure server performance: how fast the server
can respond to an individual request, and how much data the server can
output. These two metrics, response time and throughput, aren't necessarily
correlated; it's possible to improve one at the expense of the other.
Response time is measured in seconds or milliseconds; throughput is
measured as work over time: HTTP responses per second, KB output per
second, etc.
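
As a rough illustration of the difference, here's a minimal Python sketch
that collects both numbers from a single sequential loop. The URL and
request count are made up, and a real load test would use many concurrent
virtual users, but the two metrics fall out the same way:

import time
import urllib.request

URL = "http://example.com/test.cfm"  # hypothetical test page
REQUESTS = 100

times = []        # per-request response times, in seconds
total_bytes = 0

start = time.perf_counter()
for _ in range(REQUESTS):
    t0 = time.perf_counter()
    with urllib.request.urlopen(URL) as resp:
        body = resp.read()
    times.append(time.perf_counter() - t0)
    total_bytes += len(body)
elapsed = time.perf_counter() - start

# Response time: how quickly the server answers one request.
print(f"avg response time: {1000 * sum(times) / len(times):.1f} ms")
print(f"max response time: {1000 * max(times):.1f} ms")

# Throughput: how much work the server does over time.
print(f"throughput: {REQUESTS / elapsed:.1f} responses/sec, "
      f"{total_bytes / elapsed / 1024:.1f} KB/sec")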

To perform the testing, you use load testing tools such as Segue
SilkPerformer. SilkPerformer's reporting piece breaks down the numbers
pretty well, making the testing process relatively easy.
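
If you just want a feel for what such tools automate, a crude
approximation is to run several concurrent "virtual users" yourself. The
Python sketch below does exactly that; the URL and user counts are
arbitrary, and this is nowhere near a substitute for a real load testing
product:

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://example.com/test.cfm"  # hypothetical test page
VIRTUAL_USERS = 10
REQUESTS_PER_USER = 20

def virtual_user(_):
    """One simulated user issuing sequential requests; returns timings."""
    timings = []
    for _ in range(REQUESTS_PER_USER):
        t0 = time.perf_counter()
        urllib.request.urlopen(URL).read()
        timings.append(time.perf_counter() - t0)
    return timings

with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
    results = list(pool.map(virtual_user, range(VIRTUAL_USERS)))

all_times = [t for user in results for t in user]
print(f"{len(all_times)} requests, "
      f"avg {1000 * sum(all_times) / len(all_times):.1f} ms under load")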

Of course, it's more complicated than that. Response time covers the entire
time between the request and the response, so it includes CF's processing
(which itself includes database processing), how long the request sat in the
request queue before CF got around to processing it, and the time it took to
transfer the request and response between browser and server. The
CF-specific portions can be measured through performance counters.
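
To make that breakdown concrete, here's an illustrative calculation with
entirely made-up numbers; the point is just that database time is nested
inside CF processing time, so it shouldn't be counted twice:

network_ms = 40        # request/response transfer, browser <-> server
queue_ms = 15          # time spent waiting in CF's request queue
cf_processing_ms = 200 # CF template execution, which includes...
db_ms = 80             # ...database processing, inside the CF time above

total_ms = network_ms + queue_ms + cf_processing_ms
print(f"client-visible response time: {total_ms} ms")
print(f"  of which database work: {db_ms} ms "
      f"(inside the {cf_processing_ms} ms of CF time)")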

You'll also want to separate web server performance from application server
performance. These can be separated quite easily by testing static files
and scripts in separate runs, rather than mixing both in one test.
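
As a rough sketch of that separation (again with hypothetical URLs), you
can time a static file and a CF template independently, and treat the
difference as an approximation of the application server's share:

import time
import urllib.request

def avg_ms(url, n=50):
    """Average response time over n sequential requests, in ms."""
    t0 = time.perf_counter()
    for _ in range(n):
        urllib.request.urlopen(url).read()
    return 1000 * (time.perf_counter() - t0) / n

static_ms = avg_ms("http://example.com/static.html")  # web server only
dynamic_ms = avg_ms("http://example.com/page.cfm")    # web + app server

print(f"web server alone:         {static_ms:.1f} ms")
print(f"web + application server: {dynamic_ms:.1f} ms")
print(f"app server overhead:      {dynamic_ms - static_ms:.1f} ms (approx.)")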

Now, you won't be able to get to the level of detail you've mentioned
above, where you essentially define a unit of work, then measure how many
units of work a specific server can produce. You're always going to be
measuring under very limited circumstances; for example, load testing an
application that will end up on a shared server won't tell you very much,
because there will be an unknown amount of work being done for the other
applications on that server. Likewise, knowing that there's a PHP
application on the same server, by itself, won't be very useful; it's more
important to know how much that application will take from the server as a
whole. You could, of course, simply load test all of the applications on
the server simultaneously, but I suspect that most shared hosts aren't
going to shell out that kind of money.

The thing to remember across all of this, though, is that performance is a
very complex variable that you can't carry across environments very well.
The performance of a specific application depends on so many things that
it's hard to make a meaningful statement about it without replicating the
environment in which that application runs. This is why it's generally
recommended that you build an exact replica of your production environment
for staging and load testing purposes.

Dave Watts, CTO, Fig Leaf Software
http://www.figleaf.com/
voice: (202) 797-5496
fax: (202) 797-5444