On 8/29/2011 11:29 PM, Justin R. Bendich wrote:
Do any of you old-timers (e.g. Fairchild, Cole) know how the EDIT (ED) instruction came to be the way it is? It's one of the original IBM 360 instructions. Has to be the most complicated of them. Did they have microcode back then?
My understanding is that it was designed to meet business specifications: the way EDMK sets R1 allows easy insertion of a '$' or asterisk "protection" character, and placing the sign after the numerics supports the common commercial usage of CR or '-' after the number. Microcode was available on some machines. The design was elegant, and simple to implement.
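[Editor's sketch] The floating-currency trick described above can be modeled in a few lines of Python. This is a toy working in ASCII rather than EBCDIC, it ignores sign nibbles and the X'22' field separator, and it mirrors real EDMK only loosely: the model returns the index of the first digit that turned significance on (real EDMK puts its address in R1, and leaves R1 alone when significance was forced by the X'21' starter, which the model mirrors by returning None).

```python
DIGIT_SELECT = '\x20'   # digit selector
SIG_START    = '\x21'   # significance starter

def edmk(pattern, digits):
    """Toy EDMK: edit `digits` under `pattern`, also reporting where
    the first significant digit landed (None if significance was set
    only by the significance starter)."""
    fill = pattern[0]            # first pattern byte is the fill character
    out = [fill]
    s_trigger = False            # the significance indicator ("S-trigger")
    first_sig = None             # what real EDMK would return in R1
    src = iter(digits)
    for ch in pattern[1:]:
        if ch in (DIGIT_SELECT, SIG_START):
            d = next(src)
            if d != '0' or s_trigger:
                if not s_trigger:
                    first_sig = len(out)   # a nonzero digit set the trigger
                out.append(d)
                s_trigger = True
            else:
                out.append(fill)
                if ch == SIG_START:        # trigger turns on AFTER this byte
                    s_trigger = True
        else:                    # message byte (e.g. ',' or '.')
            out.append(ch if s_trigger else fill)
    return ''.join(out), first_sig

def float_dollar(pattern, digits):
    """Back up one byte from the EDMK result and plant a '$',
    as the classic BCTR R1,0 / MVI 0(R1),C'$' sequence does."""
    text, pos = edmk(pattern, digits)
    if pos is not None:
        text = text[:pos - 1] + '$' + text[pos:]
    return text
```

For example, editing '0012' under a blank-fill pattern of three digit selectors, a significance starter, and a final digit selector yields '  $12', with the dollar sign floated up against the first significant digit.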
The main thing that bugs me about this instruction is that if you want leading zeroes, you have to add a "fix up" instruction at the end, or supply an extra zero by way of the fill character. This is because the significance starter does not turn on the S-trigger for its own byte. The way the fill character is set also bugs me: it has to be the first byte of the pattern.
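[Editor's sketch] The behavior being complained about can be shown with a small Python model of ED. Again this is an illustrative toy, not an emulation: it works in ASCII, takes the source as a digit string, and ignores sign handling and the field separator.

```python
DIGIT_SELECT = '\x20'      # digit selector
SIG_START    = '\x21'      # significance starter

def edit(pattern, digits):
    """Toy ED: edit `digits` under `pattern`."""
    fill = pattern[0]          # first pattern byte is the fill character
    out = [fill]
    s_trigger = False          # the S-trigger (significance indicator)
    src = iter(digits)
    for ch in pattern[1:]:
        if ch in (DIGIT_SELECT, SIG_START):
            d = next(src)
            if d != '0' or s_trigger:
                out.append(d)
                s_trigger = True
            else:
                out.append(fill)
                # For X'21' the trigger turns on only AFTER this byte,
                # so a zero digit here still becomes the fill character.
                if ch == SIG_START:
                    s_trigger = True
        else:                  # message byte
            out.append(ch if s_trigger else fill)
    return ''.join(out)
```

Putting the significance starter first in hopes of showing every digit demonstrates the complaint: editing '0012' under a blank fill still produces '  012', because the zero at the starter position itself is replaced by fill, so a fix-up (or a zero fill character) is needed to get '0012'.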
It sounds as though you're complaining that you can't use it "out of the box" without careful consideration. If you want a field with leading zeroes, you can always use UNPK. What bugs me is that the TRAP instructions fail unless the environment is set up for them; had IBM made them effective no-ops, they could have been used for debugging LPA-resident code.
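[Editor's sketch] The UNPK route can be illustrated with a byte-level Python model. This is a simplification: real UNPK pads or truncates based on the two operand lengths, which the model ignores.

```python
def unpk(packed):
    """Toy UNPK: expand packed decimal bytes to zoned decimal.
    Every digit gets an F zone except the last byte, where the sign
    nibble from the packed field lands in the zone position."""
    nibbles = []
    for b in packed:
        nibbles.extend((b >> 4, b & 0xF))
    *digits, sign = nibbles          # last nibble is the sign
    zoned = [0xF0 | d for d in digits]
    zoned[-1] = (sign << 4) | digits[-1]
    return bytes(zoned)

def printable(packed):
    """UNPK followed by OI LAST,X'F0': force the sign byte's zone to F
    so the whole field prints as digits -- leading zeros included,
    no edit pattern required."""
    z = bytearray(unpk(packed))
    z[-1] |= 0xF0
    return bytes(z)
```

So a packed X'00123C' unpacks to zoned X'F0F0F1F2C3'-style bytes, and the OI fix-up turns the C3 into F3, giving a printable field with its leading zeros intact.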
Why is it like that? It IS a useful instruction, but it's annoying.
It's no more annoying than CVD generating a sign of x'0C' rather than x'0F', or any number of other instructions that could have been done differently. In these days of gigabyte regions, it's trivial to write subroutines and macros that do what you want rather than what the machine provides. And the 360 was a far cry from the 700 series, whose instruction set was a lot simpler but required more code to accomplish anything. After working on a 7094 and its predecessors for several years, I always marveled at the 360 instruction set and the thought that went into it.

Gerhard Postpischil
Bradford, VT
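[Editor's footnote] The CVD aside above, that the preferred plus sign it generates is C rather than the "unsigned" F, can be sketched with another toy Python model (ASCII-side simplification, not real hardware behavior):

```python
def cvd(value):
    """Toy CVD: convert a signed integer to an 8-byte packed-decimal
    field. The sign nibble is C (plus) or D (minus), never F --
    the point lamented above."""
    sign = 0xC if value >= 0 else 0xD
    digits = f'{abs(value):015d}'        # 15 digit nibbles fill 8 bytes
    nibbles = [int(c) for c in digits] + [sign]
    return bytes((nibbles[i] << 4) | nibbles[i + 1] for i in range(0, 16, 2))
```

A caller wanting the F sign for zoned output has to fix it up afterward, e.g. with an OI on the last byte.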
