On Feb 17, 2014, at 3:42 PM, Charles Mills wrote:
Starting a new thread.
---------------------------------- SNIP --------------------------------------------------------------
2. Much of the increase in speed has been due to increased numbers of processors per box. That gives the customer "more MIPS," but it is no help to processes that are either inherently single-thread, or have been implemented in a way that makes them single-thread, and must be completed within some finite window.

3. Most significantly, as machines have gotten faster, customers have also gotten much more cost conscious, and are very aware that every instruction brings them closer to the day that they have to upgrade the box, which will bring an inevitable major increase in the cost of IBM and non-IBM software.
------------------------------- SNIP ----------------------------------------------------------------
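The single-thread limit in point 2 is the classic Amdahl's-law bound: if some fraction of a job cannot run in parallel, adding processors stops helping long before the box runs out of engines. A minimal sketch (the 95% parallel fraction and processor counts are illustrative, not from the thread):

```python
# Amdahl's law: speedup on n processors when a fraction p of the work
# parallelizes and the rest is serial. Values here are illustrative.
def amdahl_speedup(p: float, n: int) -> float:
    """Speedup = 1 / ((1 - p) + p / n)."""
    return 1.0 / ((1.0 - p) + p / n)

# Even a 95%-parallel workload tops out far short of the engine count:
for n in (2, 16, 64):
    print(f"{n:2d}-way: {amdahl_speedup(0.95, n):.2f}x")
```

At 64 engines the 5% serial residue caps the speedup near 15x, which is why a process "that must be completed within some finite window" eventually needs a faster single engine, not more of them.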
Charles,
I am sort of in agreement, but about 30 years ago I worked at a company where (at least in one division) they really went whole hog on the idea of multithreading. Their online system ran transactions on multiple (30+) TCBs. Their production batch ran many, many TCBs, and they really used the system efficiently.
As an example, each of their production jobs ran 6-10 (sometimes only one) active TCBs. It was, of course, all done in assembler. Even coding as efficiently as they did, they pinned the needle at 100 percent on a multiprocessor (both sides); if another processor had been available, they would have pinned that one as well, and perhaps more (this is well-reasoned conjecture).
The problem was that, at the time, the 168-3 (MP) was the fastest IBM computer available (at least to the public). There were two major issues: they always wanted more CPU, and the code was pretty much unintelligible to most programmers.
There was just no way they could keep up with the CPU demand; it was insatiable. I do not remember whether they were on a monthly maintenance cycle or their own (it's been a long time).
The point I am trying to make is that throwing CPU cycles at a problem does not always solve it. Optimization did little for them because they were so CPU intensive. They were always looking for ways to optimize their code, but the small gains they made were sucked up by other processes.
I think if IBM could have delivered one of its modern-day systems, it would have been sucked dry as well. The lesson for me is that even a well-written process can only be fed so much resource.
I would love to see what would happen on a current processor, say a 64-way.
Ed
----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN