At 2004-11-08T11:12:21+1300, Timothy Pick wrote:

> Reminds me of this job interview I went to a few weeks ago, where the
> guy said to me "if it is running too slow  (talking about their
> product), our customers are the kind where they'll just go out and buy
> a new server... so we don't bother optimising because it is not worth
> our time."

That makes perfect sense, for the common case.  Optimisation is usually
slow, expensive work; it can introduce additional bugs, and (worst of
all, in some ways) the hand-tuned code can end up performing worse when
the customer upgrades to more modern hardware.

The costs become more obvious when you consider that a server or cluster
configuration, for example, will be designed to accommodate the
customer's existing load, plus n years of growth at the current
estimated growth rate.  By the time the customer is ready to upgrade
their hardware, the speed of the available machines has usually
increased significantly... unless they're stuck on nearly
impossible-to-upgrade legacy hardware for whatever reason.

For most applications, spending time chasing a 10% performance
improvement is a waste of effort: it really is cheaper to buy a faster
CPU or more memory.  But that's not true in every case, of course.
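Before deciding whether a change is worth keeping, measure it.  A
minimal sketch with Python's standard timeit module (the workloads here
are hypothetical, just two ways of computing the same sum):

```python
import timeit

# Hypothetical micro-benchmark: compare two equivalent ways of summing
# squares.  If the difference is small, the "optimised" version may not
# be worth the extra complexity.
naive = timeit.timeit("sum([i * i for i in range(1000)])", number=1000)
lazy = timeit.timeit("sum(i * i for i in range(1000))", number=1000)
print(f"list comprehension: {naive:.4f}s, generator: {lazy:.4f}s")
```

The absolute numbers depend entirely on the machine; the point is to
have numbers at all before spending optimisation effort.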

What is important, though, is knowing which operations are expensive,
and in which cases, and then either ensuring those cases never occur in
the product, or changing the basic approach to find a simple but more
efficient way to perform that operation.  And remember:

"Premature optimisation is the root of all evil."
 -- C. A. R. Hoare (often attributed to D. E. Knuth)
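To make the "change the basic approach" point concrete, here is a
hypothetical Python sketch: deduplicating a list while preserving order.
The win comes from swapping the data structure, not from tuning the
inner loop.

```python
def dedupe_slow(items):
    # O(n^2): each `in` check scans the growing result list.
    result = []
    for item in items:
        if item not in result:
            result.append(item)
    return result

def dedupe_fast(items):
    # O(n): a set makes membership testing O(1) on average --
    # a change of approach rather than a micro-optimisation.
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result
```

Both return the same result; only the cost of the expensive operation
(membership testing) changes.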

Cheers,
-mjg
-- 
Matthew Gregan                     |/
                                  /|                [EMAIL PROTECTED]
