Starting a new thread.

It seems to me that as the hardware has gotten faster and faster, it is
tempting to think that optimization and CPU time no longer matter. I think
three things have conspired to make that untrue:

1. Of course, as hardware has gotten faster and faster, transaction volumes
have gotten greater. The one instruction that used to be executed 100,000
times a day now gets executed a million times a day.

2. Much of the increase in speed has been due to increased numbers of
processors per box. That gives the customer "more MIPS," but it is no help to
processes that are either inherently single-threaded, or have been implemented
in a way that makes them single-threaded, and must complete within some
finite window.

3. Most significantly, as machines have gotten faster, customers have also
gotten much more cost conscious, and are very aware that every instruction
brings them closer to the day that they have to upgrade the box, which will
bring an inevitable major increase in the cost of IBM and non-IBM software.

It would be an interesting exercise to estimate the dollar cost of a million
instructions executed per day, assuming a typical installation and a typical
mix of IBM and non-IBM software -- on the assumption that those million
instructions represent some fraction x of the need to upgrade to a faster
box, and thus that same fraction of the increase in hardware and software
costs due to that upgrade.
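That exercise could be sketched as a back-of-envelope model. Every number
below (upgrade price, software uplift, MIPS gained, batch window) is an
invented placeholder, not a figure from any real installation; the point is
only the shape of the calculation.

```python
# Back-of-envelope cost model for "what does a million extra
# instructions per day cost?" -- all inputs are hypothetical.

def upgrade_cost_per_mips(hw_upgrade_cost, sw_uplift_cost, mips_gained):
    """Dollars per additional MIPS of capacity, treating the hardware
    price and the MIPS-based software license uplift as one pool."""
    return (hw_upgrade_cost + sw_uplift_cost) / mips_gained

def daily_instructions_to_mips(instructions_per_day, window_seconds):
    """Average MIPS consumed if that daily instruction count must
    complete within the given processing window."""
    return instructions_per_day / window_seconds / 1_000_000

# Hypothetical installation: a $500,000 hardware upgrade plus
# $1,500,000 in software uplift buys 200 extra MIPS.
cost_per_mips = upgrade_cost_per_mips(500_000, 1_500_000, 200)

# A million extra instructions per day inside an 8-hour batch window.
extra_mips = daily_instructions_to_mips(1_000_000, 8 * 3600)

# The share of the upgrade cost attributable to that extra work.
attributable_cost = extra_mips * cost_per_mips
print(f"${attributable_cost:.2f} per day of upgrade cost")
```

With these made-up inputs the per-day cost is small, but the model makes
the post's point visible: multiply the instruction count by real
transaction volumes, and every wasted instruction has a dollar figure
attached to the day the box must be upgraded.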

Charles 

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN