The original poster of this thread wished to speculate about z/OS 
enhancements and gave, as an example only, some possibilities for improving 
JCL.

Concerning user enhancements and their role in the design process, I pointed 
out that the principal activity of any data processing system is to process 
data, and then went on to discuss this concept.

In case I wasn't clear the first time around, user enhancement requests play 
almost no role in the design and build of new products and/or versions.  New 
products and versions are historically driven by GENERAL requirements for 
data processing, especially database.
In today's "IBM 360" world, interoperability is also very important as a way 
to slow the shrinking market.

User enhancement considerations typically come into play at delivery 
packaging time, not at design or build.  It is something of a dirty secret of 
the software life cycle.  At delivery packaging time a marketing person talks 
to the development team and asks how many enhancement requests can be said to 
be satisfied in the announcement letter.  This is followed by an annoying, 
disruptive half-day effort to make up the number and provide some supporting 
documentation.  I have worked for many software development companies, and 
in general (there are a few exceptions) this is the way the user enhancement 
business works.
Satisfying user enhancement requests is a by-product of more general design 
objectives.


I found the original poster's example JCL enhancements funny in light of 
my many years with Amdahl Corp and Gene Amdahl's famous statement, "No 
one would buy a bigger, faster, more cost effective processor if it meant 
rewriting JCL."

I provided some background on the statement rather than just the punch line.

There is a valid concern that some of the minute details of the background 
may be incorrect.  I am not a hardware engineer, and what I say about 
hardware is subject to change and revision.

I would like to speak about the following:

>>Now the IBM 360 was the worlds worse instruction set for pipelineing.
>
>Bold assertion.  Do you have any data to back that up?
>


Perhaps the phrase "worlds worse" is too extreme, but that the 360 and 
follow-on instruction sets are very bad for pipelining is an easy fact to 
deduce.

Pipelining is based on two major assumptions.
1. We know what the next instruction to be executed is.
2. Every instruction can be broken down into a predetermined number of 
execution phases, and each of these phases requires a consistent amount of 
resources to process.
When either of these assumptions is not true, even for a moment, the pipe 
stalls or fails and needs to be restarted.  At least one cycle, and perhaps 
N-1 cycles, are lost, where N is the number of phases in the pipe.
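To make the stall cost concrete, here is a toy Python model of an N-phase 
pipeline where each stall throws away between 1 and N-1 cycles.  The numbers 
are my own back-of-the-envelope figures for illustration, not measurements of 
any real machine:

```python
# Toy cost model of an N-phase pipeline.  With no stalls the pipe fills in
# N-1 cycles and then retires one instruction per cycle; each stall adds a
# penalty of 1 to N-1 lost cycles, depending on how much must be redone.
def total_cycles(instructions, phases, stalls, penalty_per_stall):
    return (phases - 1) + instructions + stalls * penalty_per_stall

ideal = total_cycles(1000, 5, 0, 0)    # no stalls
worst = total_cycles(1000, 5, 100, 4)  # 100 stalls, each losing N-1 = 4 cycles
print(ideal, worst)  # 1004 1404
```

Even a 10% stall rate at the worst-case penalty costs about 40% of the 
throughput in this sketch, which is why the two assumptions matter so much.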

Failures in the first assumption are somewhat common regardless of the 
instruction set.  Some causes for these failures are
a. Branch instruction (next instruction location can change)
b. Test instruction (next instruction location can change)
c. Supervisor call (like SVC) (next instruction location can change)
d. Interrupt, like an external or timer pop (next instruction location can change)

Failures in the second assumption are instruction set dependent.  If, as is 
the case on many types of hardware, all instructions are register to storage 
(RX), or even RX and register to register (RR), everything could be fine, but 
once you start introducing unusual instruction processing types like the 360 
storage to storage (SS) ones, everything goes haywire when these things 
appear in the mix.
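A sketch of why the SS instructions violate assumption 2: an SS instruction 
such as MVC can move anywhere from 1 to 256 bytes, so its execute phase 
cannot take a predetermined number of cycles the way an RR or RX op can.  
The cycle counts below are invented purely for illustration, not taken from 
any real 360 implementation:

```python
# Hypothetical execute-phase cycle counts.  RR/RX ops are fixed-length;
# an SS op like MVC takes time proportional to its operand length, so the
# pipeline cannot assume a uniform execute phase.
def execute_cycles(op_type, length=0, bytes_per_cycle=8):
    if op_type in ("RR", "RX"):
        return 1                                      # fixed, pipeline-friendly
    if op_type == "SS":
        return max(1, -(-length // bytes_per_cycle))  # ceiling division
    raise ValueError(op_type)

print(execute_cycles("RX"))        # 1
print(execute_cycles("SS", 256))   # 32 -- everything behind it waits
```

The point is not the exact numbers but the variance: the pipeline scheduler 
cannot know how long the execute phase will take until it has decoded the 
length field, and that uncertainty is exactly what stalls the pipe.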

So why is the 360 instruction set among the worst for pipelining?
Not only is it exposed to the normal and expected failures of assumption 1, 
but it is also, somewhat uniquely, exposed to failures of assumption 2.

It is not that everything is bad ... the unusual instruction set contributed to
1. The growth of multiprocessors.
2. The development of reduced instruction set machines.
3. A better understanding of benchmarks, where the test cases can be 
manipulated to improve results, say by avoiding branch instructions.
4. And even changes to spoken language.  The phrase "Short and Concise" was 
once a popular commentary on code:
    Short because memory for running code was scarce.
    Concise because conditionals like tests and branches disrupted pipelining.

In my first posting I made a comment about Ford (a new car rolls off the 
assembly line every 15 seconds ...).
As some of you may know, Ford is sometimes given credit for inventing the 
assembly line, which could also be called a pipeline.  As I understand it, an 
assembly line / pipeline is only as fast as its slowest workstation.  In 
building a car, painting is the slowest workstation.  This leads me to a 
famous Ford quote: "You can have it any colour you want, as long as it is 
black."
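The slowest-workstation point can be put in one line: the steady-state 
throughput of any pipeline is the reciprocal of its longest stage.  The stage 
names and times below are made up purely for illustration:

```python
# Throughput of a pipeline is limited by its slowest stage (here, paint).
stage_seconds = {"stamp": 10, "weld": 12, "paint": 45, "assemble": 20}
cars_per_hour = 3600 / max(stage_seconds.values())
print(round(cars_per_hour))  # 80 -- set entirely by the 45-second paint stage
```

Speeding up any stage other than paint changes nothing; that is the same 
logic that makes one slow SS instruction gate an entire instruction pipeline.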

Best wishes
Avram Friedman

On Wed, 12 Jan 2011 15:00:52 -0600, Tom Marchant <m42tom-
[email protected]> wrote:

>
>>Hardware design changed from
>>processing one instruction per cycle to pipelining, ie processing parts of
>>several instructions ever cycle ...
>
>I don't know when processors started to pipeline instructions.  The
>360-91 had a pipelined processor.  Early 360 machines did not.
>
>>Now the IBM 360 was the worlds worse instruction set for pipelineing.
>
>Bold assertion.  Do you have any data to back that up?
>
>>So why did Amdahl design a plug compatable copy of what was then a 
known
>>bad design?

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html
