I agree with you entirely. If I were trying to get a real-world TPC measurement: testing a query by running 2000 identical queries and then dividing the total by 2000 is not a valid way to measure the time for one query. The result will appear as much as 100X or even more too fast.
The reasons (see the sketch after the list) are:
1) The operating system will have cached the exact disk sectors required, resulting in zero disk access time for 1999 of the queries.
2) Even if not, the DBMS will have its own cache.
3) Some DBMSes will cache the query plan, so even the internal time to process the SQL into a list of lower-level actions will be reduced for 1999 of the queries.
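To make that concrete, here is a minimal sketch of the flawed loop being criticized, where run_query() is a hypothetical stand-in for whatever database call is under test, not a real API:

    import time

    def naive_benchmark(run_query, sql, n=2000):
        # Flawed: after the first iteration the OS page cache, the DBMS
        # buffer cache, and any query-plan cache are all warm, so the
        # remaining n - 1 iterations measure cache hits, not disk work.
        start = time.monotonic()
        for _ in range(n):
            run_query(sql)  # the exact same query every time, fully cached
        return (time.monotonic() - start) / n  # can look 100X too fast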
A more realistic test would use multiple processes: 1) one doing normal "server stuff" to keep the OS-level caches flushed, 2) one doing background writes to the DBMS, say logging CDR data at a realistic rate, and 3) the test program itself, running 2000 __random__ test queries with a small, realistic delay between each.
Next you'd divide the total time by 2000 and then subtract out the one- or two-second delay you introduced (sketched below). I'll bet a beer the result is longer than 2ms.
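A minimal sketch of step 3 plus that arithmetic, with run_query() as before and a hypothetical random_sql() helper that produces a different, realistic query each time:

    import time

    def realistic_benchmark(run_query, random_sql, n=2000, delay=1.0):
        # Random queries defeat the plan cache and touch different disk
        # sectors; the delay gives the background load time to evict
        # the OS and DBMS caches between queries.
        start = time.monotonic()
        for _ in range(n):
            run_query(random_sql())  # a *different* query each time
            time.sleep(delay)        # small, realistic gap
        total = time.monotonic() - start
        return total / n - delay     # divide by 2000, subtract the delay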
An even more realistic test would run four or five copies of step 3 above concurrently (sketched below). In the real world with MySQL, the biggest constraint on performance and scalability is table locking, and a very simple test will ignore this single largest factor.
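And a sketch of that concurrent variant, where worker is a hypothetical callable wrapping the realistic_benchmark loop above:

    from multiprocessing import Process

    def run_concurrent_copies(worker, copies=5):
        # Several concurrent readers, plus the background CDR writer,
        # contend for MySQL's table locks: the contention that a
        # single-process test never sees.
        procs = [Process(target=worker) for _ in range(copies)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()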
The comparison was of the speed of one query against another. Both were run in the same way on the same hardware, so both would have gained or lost from the same factors. I really wasn't concerned with the actual total transaction time at this point; that will come later and will involve a number of queries to complete the operation. It was more to determine which single query completed faster.
But thanks for the advice on a testing procedure; it will be useful later.
_______________________________________________
Asterisk-Users mailing list
[EMAIL PROTECTED]
http://lists.digium.com/mailman/listinfo/asterisk-users
