I think it's much more fundamental than that. At least in Western
civilizations our methods of numeric notation are essentially
"big-endian": we write numbers from left to right, most significant
digits first; and if asked to count the symbols written down, most
people would instinctively count from left to right as well, matching
the standard orientation of the positive x axis in Cartesian
coordinates. That corresponds to regarding higher memory addresses as
proceeding to the right.
Punched-card conventions simply followed manual notation, as did
hardware registers on early mainframes and the ordering of characters
and digits when mapped into mainframe memory.
I don't think little-endian usage caught on until it became practical
to design inexpensive mini-computers and eventually microprocessors,
and there were no doubt design simplifications that made little-endian
ordering the cheaper solution at the time.
An overriding design requirement in those days was to keep things simple
at the hardware level, even if that made things more difficult or
unnatural for the humans who had to do the programming and debugging.
Computers whose evolution can be traced back to early mainframes tend to
be big-endian. Computers that evolved from mini-computer or
microprocessor roots tend to use little-endian conventions.
Joel C. Ewing
On 03/08/2017 04:16 PM, Charles Mills wrote:
> Two words: punched cards.
>
> Numbers on punched cards were "big-endian." IBM was the dominant power in
> tabulating machines and never wanted to let that advantage slip away.
>
> Charles
>
>
> -----Original Message-----
> From: IBM Mainframe Discussion List [mailto:[email protected]] On
> Behalf Of John McKown
> Sent: Wednesday, March 8, 2017 12:32 PM
> To: [email protected]
> Subject: Re: curious: why S/360 & descendants are "big endian".
>
> On Wed, Mar 8, 2017 at 1:39 PM, Tom Marchant <
> [email protected]> wrote:
>
>> On Wed, 8 Mar 2017 11:33:31 -0700, Paul Gilmartin wrote:
>>
>>> It probably saved hardware to decrement as well as increment in
>>> accessing storage. Consider that CLC goes left-to-right but AP goes
>>> right-to-left.
>> AP goes right to left because it would otherwise have to do more work
>> to propagate carry.
>>
> Right. But it could go left to right if the nybbles in the packed decimal
> number were in reverse order, with the sign nybble being the first
> (leftmost) nybble in the data stream, i.e., F43210 instead of 01234F.
> That was likely not acceptable, because one reason programmers love
> packed rather than binary is that they can read it directly in a hex dump,
> and dumps were a far more prevalent debugging tool in the far past. Some
> decisions are not really hardware dictated. They're cultural.
>
>
>> CLC goes left to right because it can stop as soon as it finds a
>> mismatch and recognize which is greater. If all you wanted to check
>> for was that the two are equal, you could go either way, but that's not as
>> useful.
>>
>> --
>> Tom Marchant
>>
>>
> ...
--
Joel C. Ewing, Bentonville, AR [email protected]
----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN