On Fri, 16 Jun 2017 12:55:53 -0500, Paul Gilmartin
([email protected]) wrote about "Re: RFE?
xlc compile option for C integers to be "Intel compat" or Little-Endian"
(in <[email protected]>):

> On Fri, 16 Jun 2017 16:43:38 +0100, David W Noon wrote:
>> ...
>> This is not the way computers do arithmetic. Adding, subtracting, etc.,
>> are performed in register-sized chunks (except packed decimal) and the
>> valid sizes of those registers are determined by the architecture.
>>
> I suspect programmed decimal arithmetic was a major motivation for
> little-endian.

AFAIAA, there are no little-endian platforms that perform decimal
arithmetic as such, except on a byte-by-byte basis in a loop.

The nearest I can offer is the Intel 80x87 FPU. This can load a packed
decimal number [in little-endian order] into a floating point register,
converting to IEEE binary floating point as it goes; conversely, it can
store a binary floating-point value back into packed decimal. However,
all arithmetic is done in floating point.

In fact, I have seen only two hardware platforms that perform packed
decimal arithmetic: IBM and plug-compatible mainframes; Groupe Bull /
Honeywell-Bull / Honeywell/GE / General Electric mainframes derived from
the GE-600 series -- although these did not get packed decimal until
they became the Honeywell H-6000 series.

>> In fact, on little-endian systems the numbers are put into big-endian
>> order when loaded into a register. Consequently, these machines do
>> arithmetic in big-endian.
>>
> Ummm... really?

Yes.

> I believe IBM computers number bits in a register with
> 0 being the most significant bit; non-IBM computers with 0 being the
> least significant bit.  I'd call that a bitwise little-endian.  And it
> gives an easy summation formula for conversion to unsigned integers.

The endianness is determined by where the MSB and LSB are stored. On IBM
machines the MSB is in the left-most byte of the register and the LSB in
the right-most byte. That is big-endian.

Ascribing indices to the bit positions in either order makes no
difference. It is the order of *storage* that determines endianness.

>> As someone who was programming DEC PDP-11s more than 40 years ago, I can
>> assure everybody that little-endian sucks.
>>
> But do the computers care?  (And which was your first system?  Did you
> feel profound relief when you discovered the alternative convention?)

The computers perform their arithmetic in whatever byte order the
hardware designers choose.

My first system was a clone of an IBM 360. I felt dismay when I first
read a core dump from a PDP-11.

> IIRC, PDP-11 provided for writing tapes little-endian, which was wrong for
> sharing numeric data with IBM systems, or big-endian, which was wrong
> for sharing text data.

Text data were not a problem, as they were written as a byte stream.
Binary data were where the endianness differences arose.

Fortunately, DEC realized that their design was crap and added a
hardware instruction to put 16-bit binary integers into big-endian
order; it had the assembler mnemonic SWAB (SWAp Bytes). The company I
worked for in the 1970s exchanged data between many PDP-11s and a
central IBM 370, usually without problems.

> For those who remain unaware on a Friday:
>     https://en.wikipedia.org/wiki/Lilliput_and_Blefuscu#History_and_politics

I have long enjoyed Swift (not the programming language).
-- 
Regards,

Dave  [RLU #314465]
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*
[email protected] (David W Noon)
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*


----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN
