On 29/11/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
OK, now I really understand the 90% Time! I thought that the 90% was the average
response time of the fastest 90% of the samples. I think this explanation
should be added to the manual!

I have the problem that I have some really high response times of about ~30 min.
These are only a few samples, but they are enough to raise the average time. So now I
cannot use the average time, because the averages are distorted.

The average time is still correct, but it may be misleading.
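
To make that concrete, here is a rough sketch (made-up numbers and a made-up
class name, not real JMeter code or output) of how a single 30-minute sample
drags the average up while the 90% Line barely moves:

public class NinetyPercentLine {
    public static void main(String[] args) {
        // Nine normal samples plus one ~30-minute outlier, in milliseconds
        long[] times = { 200, 205, 210, 215, 220, 225, 230, 240, 250, 1800000 };

        long sum = 0;
        for (long t : times) {
            sum += t;
        }
        System.out.println("Average = " + (sum / (double) times.length) + " ms"); // ~180199.5 ms

        java.util.Arrays.sort(times);
        // 90% Line: roughly, the time below which 90% of the samples fall
        int index = (int) Math.ceil(0.9 * times.length) - 1;
        System.out.println("90% Line = " + times[index] + " ms");                 // 250 ms
    }
}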

So the best would be to calculate something like a Gaussian time, where you
take only 90% of your samples to calculate the average time. The first 5% and
the last 5% (I mean the 5% of the samples with the lowest and the 5% with the
highest response times) you cut off and discard.
But how? Do you have any ideas how to do this job?

Adding Gaussian averaging would be possible, of course, but you should
be aware that the calculation would require storing all the values for
the entire test run.

Ordinary averaging does not require all values to be stored.
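
To sketch the difference (illustration only, not JMeter code): the ordinary
average just needs a running sum and a count, whereas a trimmed mean or
percentile needs the complete list of elapsed times available at the end of
the run.

import java.util.ArrayList;
import java.util.List;

class StatsSketch {
    // Running average: constant memory, updated once per sample.
    private long sum = 0;
    private long count = 0;

    // A trimmed mean or percentile needs every sample kept until the end
    // of the run, so memory grows with the number of samples.
    private final List<Long> all = new ArrayList<Long>();

    void addSample(long elapsedMillis) {   // called once per sample result
        sum += elapsedMillis;
        count++;
        all.add(elapsedMillis);            // only needed for the trimmed calculation
    }

    double average() {
        return count == 0 ? 0 : (double) sum / count;
    }

    // A trimmed mean would sort 'all' at the end of the run, drop the first
    // and last 5% of entries, and average the remainder.
}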

If you need this information now, then I suggest you save the data to
a CSV file, and use a spreadsheet to do the calculations.
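
For instance, something along these lines would do the trimming outside
JMeter - assuming the results were saved as CSV with a header line, and that
the elapsed time column is called "elapsed" (adjust the name if your output
differs):

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Reads a JMeter CSV result file and prints the 5% trimmed mean.
// Naive comma splitting - good enough if the labels contain no commas.
public class TrimmedMean {
    public static void main(String[] args) throws Exception {
        BufferedReader in = new BufferedReader(new FileReader(args[0]));
        List<String> header = java.util.Arrays.asList(in.readLine().split(","));
        int col = header.indexOf("elapsed");
        List<Long> elapsed = new ArrayList<Long>();
        String line;
        while ((line = in.readLine()) != null) {
            elapsed.add(Long.parseLong(line.split(",")[col]));
        }
        in.close();

        Collections.sort(elapsed);
        int cut = (int) (elapsed.size() * 0.05);   // drop 5% from each end
        long sum = 0;
        int n = 0;
        for (int i = cut; i < elapsed.size() - cut; i++) {
            sum += elapsed.get(i);
            n++;
        }
        System.out.println("5% trimmed mean: " + ((double) sum / n) + " ms");
    }
}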

But feel free to create a Bugzilla enhancement request to add this feature.
It could perhaps be added to the Aggregate Report as an optional column.

An alternative enhancement might be to add optional min/max cut-off times to
the Summary Report - samples outside those limits could either be ignored or
assumed to be at the appropriate limit. This would be cheap in memory terms,
but would not be the same as ignoring the top/bottom 5%.
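
Roughly what I have in mind, as a sketch only - the MAX_MILLIS cut-off is
made up for illustration, it is not an existing JMeter property:

// Running average where samples over a configured ceiling are either
// dropped or counted as being exactly at the ceiling. Constant memory.
class CappedAverage {
    private static final long MAX_MILLIS = 60000;         // hypothetical cut-off, 1 minute
    private static final boolean IGNORE_OVER_MAX = false;  // false = clamp to MAX_MILLIS

    private long sum = 0;
    private long count = 0;

    void addSample(long elapsedMillis) {
        if (elapsedMillis > MAX_MILLIS) {
            if (IGNORE_OVER_MAX) {
                return;                     // ignore the sample completely
            }
            elapsedMillis = MAX_MILLIS;     // or assume it is at the limit
        }
        sum += elapsedMillis;
        count++;
    }

    double average() {
        return count == 0 ? 0 : (double) sum / count;
    }
}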
