On Mon, 19 Feb 2018 14:13:27 -0600, Paul Gilmartin <[email protected]> wrote:
>How about USASCII?  Is that unambiguously the 7-bit set?
>
>I've encountered two EBCDIC C implementations.  One of them returned "true" if
>the EBCDIC character translated to ASCII was a USASCII character.  The other
>returned "true" simply if the EBCDIC code point was less than 128.

Yes, USASCII (US-ASCII) is the original 7-bit code set.
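A hypothetical sketch of the two behaviors described above, assuming EBCDIC code page 037. The `ebcdic_to_ascii` table here covers only a handful of code points for illustration (a real implementation would use a full 256-entry translation table); the two predicates disagree in both directions, e.g. on 'a' (EBCDIC 0x81) and on the cent sign (EBCDIC 0x4A):

```c
#include <assert.h>
#include <stdbool.h>

/* Partial EBCDIC (code page 037) -> ASCII/Latin-1 mapping, for
   illustration only; returns -1 for code points not in this sketch. */
static int ebcdic_to_ascii(unsigned char e)
{
    switch (e) {
    case 0x40: return 0x20;  /* space      */
    case 0x4A: return 0xA2;  /* cent sign: not a US-ASCII character */
    case 0x81: return 0x61;  /* 'a'        */
    case 0xC1: return 0x41;  /* 'A'        */
    default:   return -1;    /* unmapped in this sketch */
    }
}

/* Implementation 1: translate to ASCII, then test US-ASCII membership. */
static bool isascii_by_translation(unsigned char e)
{
    int a = ebcdic_to_ascii(e);
    return a >= 0 && a < 128;
}

/* Implementation 2: simply test whether the raw EBCDIC
   code point is less than 128. */
static bool isascii_by_codepoint(unsigned char e)
{
    return e < 128;
}
```

For EBCDIC 'a' (0x81), the translation-based check says true but the raw-code-point check says false; for the cent sign (0x4A), the results flip.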

>That's *so* 20th century!  And 95 is better than 56.  And luck had little to 
>do with it;
>it was more lackadaisical design.  7-bit ASCII was extant when EBCDIC was 
>conceived.
>Prudence should have dictated that EBCDIC code points corresponding to those 95
>glyphs be kept invariant.

Perhaps, but IBM was trying to provide a proof point for the "International" 
part of its name.  I speculate that it was about reusing the existing character 
generator implementations in the field and that those were exceedingly stingy 
in terms of how many glyphs they would display.  While that explains things 
like National Use Characters, it doesn't explain why code page designers 
would allow characters with existing assignments to just float around.  (dope 
slap)  All influenced by APL, ATMS, Selectrics, print chains, and an apparent 
lack of coordination.  And WHY would you move lower case 'a' to 0x61 when the 
code point assignments are arbitrary in the post-card reader era to begin with? 
 (baseball bat).  "Why, Santy Claus, WHY?"

Alan Altmark
IBM

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN
