My understanding was that using signed binary numbers made COBOL more efficient
for arithmetic operations: L(H) a register, do the arithmetic and ST(H) the
result. Unsigned binary meant L(H) a register, force it positive, do the
arithmetic, force the result positive, ST(H) the result. (I'm
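The extra work the poster describes (forcing operands and result positive) can be sketched in C rather than System/360 assembler. This is a hypothetical illustration, not generated compiler code; the function names are invented, and the instruction names in the comments are only the rough shapes being described.

```c
#include <stdlib.h>

/* Signed PIC S9(4) BINARY maps straight onto a halfword add: */
short add_signed(short a, short b) {
    return (short)(a + b);               /* LH, AH, STH */
}

/* Unsigned PIC 9(4) BINARY obliges the compiler to force the
   operands and the result positive, i.e. extra instructions: */
short add_unsigned(short a, short b) {
    short r = (short)(abs(a) + abs(b));  /* LH, LPR, AH ...   */
    return (short)abs(r);                /* ... LPR, then STH */
}
```

The two extra absolute-value steps are the cost being attributed to unsigned fields.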
On Jun 16, 2022, at 10:43:36, Robin Vowels wrote:
Computers have had instructions for signed and unsigned binary
since at least 1951. When negative values are expressed in
two's-complement notation, ordinary addition gives the same
result whether the operation is signed or unsigned.
It
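The equivalence described above can be demonstrated in a short C sketch, assuming 16-bit two's-complement arithmetic (the helper names are mine, for illustration only):

```c
#include <stdint.h>

/* The same 16-bit addition performed on the bits yields the same
   result whether the operands are viewed as signed or unsigned. */
uint16_t add_bits_signed(uint16_t a, uint16_t b) {
    /* reinterpret the bit patterns as signed, add, reinterpret back */
    return (uint16_t)((int16_t)a + (int16_t)b);
}

uint16_t add_bits_unsigned(uint16_t a, uint16_t b) {
    return (uint16_t)(a + b);   /* plain "logical" addition */
}
```

For example, with a = 0xFFFD (the bit pattern of -3) and b = 7, both functions return 0x0004.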
Steve Smith (no known relation) wrote:
>Every coding standard should document exactly why the standard exists, i.e.
>what benefit it provides. That might help filter out, and allow for
>updating, of some long-gone person's personal preferences (which is where
>too many coding standards come from).
On 2022-06-17 00:36, Schmitt, Michael wrote:
My company's COBOL coding standards are* to define binary fields as
signed (e.g. PIC S9(4) BINARY). I'm wondering why that's the standard.
The original standards were developed at least 40-60 years ago. They
were revised in 1994 but the signed binary guidance remained.
Unsigned binary arithmetic goes back at least to the 704 in 1955. I suspect
that it goes back farther. There was no concept of halfword at the time; it was
all 36-bit words.
Logical AND, OR, etc. have comparable antiquity.
--
Shmuel (Seymour J.) Metz
http://mason.gmu.edu/~smetz3
On Thu, 16 Jun 2022 14:36:12 + "Schmitt, Michael"
wrote:
>My company's COBOL coding standards are* to define binary fields as signed
>(e.g. PIC S9(4) BINARY). I'm wondering why that's the standard.
Because it takes extra instructions to get the absolute value.
--
Binyamin Dissen
A field defined as 9(4) or S9(4) may be limited to a maximum value of 9,999
(i.e. 4 decimal digits), depending on the chosen TRUNC compiler option.
TRUNC(BIN) means to truncate at the halfword or fullword. But we use
TRUNC(OPT), which means "do whatever is the most efficient", which can truncate
at .
So
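A rough model of the two truncation behaviors for a PIC 9(4) BINARY (halfword) receiving field can be written in C. This is a sketch of the semantics only, with invented function names; TRUNC(OPT) may behave like either one, whichever the optimizer finds cheaper.

```c
#include <stdint.h>

/* TRUNC(STD): truncate to the 4 decimal digits of the PICTURE */
uint16_t trunc_std(uint32_t v) { return (uint16_t)(v % 10000); }

/* TRUNC(BIN): truncate at the halfword boundary */
uint16_t trunc_bin(uint32_t v) { return (uint16_t)(v % 65536); }
```

For a stored value of 70000, trunc_std gives 0 while trunc_bin gives 4464, which is why the two options can produce different results.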
ADD LOGICAL and SUBTRACT LOGICAL were part of the original System/360, and are
documented in the A22-6821-0 edition of the System/360 Principles of Operation,
as well as in "Architecture of the IBM System/360", published in the IBM
Journal in April, 1964, which describes the reasoning for many
There were already logical instructions as early as the 360 machine series.
However, early COBOL compilers (and even up to Enterprise V4) implemented the
COBOL standard for numeric values by converting unsigned binary values to
packed decimal and zeroing out any integer digits to the left of the 4 digits
Many logical instructions -- fullword and character anyway -- go back to the
very first System/360s.
I've got a System 370 Yellow Card here and it includes AL, ALR, CL, CLR, SL
and SLR as System 360 instructions.
Charles
"IBM Mainframe Assembler List" wrote on
06/16/2022 10:36:12 AM:
> Or it could be that whatever version of COBOL was used then (OS/VS
> COBOL or earlier) was more efficient with signed binary, such as due
> to the choices it made in instruction selection.
My understanding, at least for
The logical instructions were in there from the get-go. I have no idea
what the implications were or are for COBOL.
Every coding standard should document exactly why the standard exists, i.e.
what benefit it provides. That might help filter out, and allow for
updating, of some long-gone person's personal preferences (which is where
too many coding standards come from).
My company's COBOL coding standards are* to define binary fields as signed
(e.g. PIC S9(4) BINARY). I'm wondering why that's the standard.
The original standards were developed at least 40-60 years ago. They were
revised in 1994 but the signed binary guidance remained.
One explanation could be