On Thu, 15 Jun 2017 20:33:13 -0500, John Mckown
([email protected]) wrote about "Re: RFE? xlc compile option
for C integers to be "Intel compat" or Little-Endian" (in
<caajsdjg3xp-2cyaiea6xpre22z+ql60wwml2sy_o5xoo2bz...@mail.gmail.com>):

> On Thu, Jun 15, 2017 at 5:05 PM, Frank Swarbrick <
> [email protected]> wrote:
> 
>> The following link gives a few reasons why little-endian might be
>> preferred:  https://softwareengineering.stackexchange.com/questions/
>> 95556/what-is-the-advantage-of-little-endian-format.  As a human I still
>> prefer big-endian, regardless of any perceived advantages for little-endian!
>>
> 
> I must disagree with the "as a human" portion of the above. It is more
> "as a speaker of a Western European language using Arabic numerals"
> (in Unicode these are called "European digits"). We got our writing
> direction, left to right, from the Romans (I'm not sure where they got
> it). But we got our positional numbering system from the Hindus via the
> Arabs (thus the "Arabic numerals"). We write the most significant digit
> on the left because the Arabs did it that way. But the Arabic languages
> are written right to left. So, from their viewpoint, they are reading
> the least significant digit first. I.e., Arabic numerals are written
> "little endian" in Arabic. Europeans just wrote them in the same
> physical direction because that's how they learned it. Using "little
> endian" is actually easier.

This would only be reflective of little-endian ordering if it used full
bit reversal. Computers use bits, so any Arabic ordering would require
all the bits to be reversed, not the bytes.

> How we do it now: 100 + 10 = 110. In our minds we must "align" the
> trailing digits (or the decimal point). But if it were written 001 + 01,
> you could just add the digits in the order in which we write them without
> "aligning" them in your mind. In the example, add the first two 0s
> together. Then add the second 0 & second 1. Finally "add" the last 1 just
> by writing it out. In a totally logical universe, the least significant
> digit (or bit if we are speaking binary) should be the first digit (or bit)
> encountered as we read. So the number one in an octet (aka byte), in
> hex, would be written 0x10 or in binary as b'10000000'.

This is not the way computers do arithmetic. Adding, subtracting, etc.,
are performed in register-sized chunks (except for packed decimal), and
the valid sizes of those registers are determined by the architecture.

In fact, on little-endian systems the numbers are put into big-endian
order when loaded into a register. Consequently, these machines do their
arithmetic in big-endian order.

As someone who was programming DEC PDP-11s more than 40 years ago, I can
assure everybody that little-endian sucks.

> And just to
> round out this totally off topic weirdness, we can all be glad that we
> don't write in boustrophedon style (switching direction every line).
> Ref: http://wordinfo.info/unit/3362/ip:21

That's all Greek to me.
-- 
Regards,

Dave  [RLU #314465]
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*
[email protected] (David W Noon)
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN
