Wed, 23 Dec 2009 17:04:49 -0800, Walter Bright wrote:
> bearophile wrote:
>> You are right. It's not easy to give average numbers for any kind of C
>> or C++ software. In benchmark-like code I've seen up to 20-25%
>> improvements, but I assume that in much larger programs the situation
>> is different. Probably if you try to compute a true average, the
>> average percentage of improvement is lower, like 5% or less. It's a
>> feature useful for hot spots of the code.
>
> Small benchmarks tend to have a high 'beta', or variance from the norm.
> The results in actual applications tend to be much closer together.
It's difficult to measure overall performance improvements in applications like image manipulation software or sound wave editors. E.g. if a complex effect now takes 2 seconds instead of 4 hours, but all GUI event processing becomes 100% slower, the application might only feel about 10% faster over a workday, because the user spends much more time in the interactive parts of the code. From what I've read, bearophile mostly uses synthetic tests.
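The arithmetic behind this is just Amdahl's law applied to a whole workday: the hot spot's speedup is diluted by the fraction of time spent elsewhere. A back-of-the-envelope sketch (all numbers here are made up for illustration, not measurements from any real editor):

```python
# Amdahl-style estimate: how does a huge speedup in one hot spot combine
# with a slowdown in the interactive code over a whole workday?
# All workload numbers below are hypothetical.

def overall_speedup(old_times, new_times):
    """Ratio of total old time to total new time across workload parts."""
    return sum(old_times) / sum(new_times)

# Hypothetical workday: 6 hours of interactive GUI use, one 4-hour batch effect.
interactive_old = 6 * 3600.0            # seconds in GUI code before
batch_old       = 4 * 3600.0            # the complex effect before

interactive_new = interactive_old * 2.0 # GUI now 100% slower
batch_new       = 2.0                   # effect now takes 2 seconds

print(round(overall_speedup([batch_old], [batch_new])))            # batch alone: 7200x
print(round(overall_speedup([interactive_old, batch_old],
                            [interactive_new, batch_new]), 2))     # whole day: 0.83
```

With these assumed numbers the 7200x hot-spot speedup is swamped entirely: the day as a whole is actually *slower* (0.83x), because the interactive part dominates the user's time. That is exactly why synthetic benchmarks of the hot spot alone overstate the benefit.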
